ITS – Inter- and Transdisciplinary Sessions
ITS1.1/ERE7.1 – Multi-scale water-energy-land nexus planning to manage socio-economic, climatic, and technological change
EGU2020-15192 | Displays | ITS1.1/ERE7.1 | Highlight
Quantifying synergies and trade-offs in the water-land-energy-food-climate nexus using a multi-model scenario approach
Jonathan Doelman, Tom Kram, Benjamin Bodirsky, Isabelle Weindle, and Elke Stehfest
The human population has grown substantially and become wealthier over recent decades. These developments have led to major increases in the use of key natural resources such as food, energy and water, causing increased pressure on the environment throughout the world. As these trends are projected to continue into the foreseeable future, a crucial question is how the provision of resources, as well as the quality of the environment, can be managed sustainably.
Environmental quality and resource provision are intricately linked. For example, food production depends on the availability of water, land suitable for agriculture, and favourable climatic circumstances. In turn, food production causes climate change through greenhouse gas emissions, and affects biodiversity through the conversion of natural vegetation to agriculture and through the effects of excessive fertilizer and pesticide use. There are many examples of the complex interlinkages between different production systems and environmental issues. To handle this complexity, the nexus concept has been introduced, which recognizes that different sectors are inherently interconnected and must be investigated in an integrated, holistic manner.
Until now, the nexus literature has predominantly consisted of local studies or qualitative descriptions. This study presents the first quantitative, multi-model nexus study at the global scale, based on scenarios developed simultaneously with the MAgPIE land-use model and the IMAGE integrated assessment model. The goal is to quantify synergies and trade-offs between different sectors of the water-land-energy-food-climate nexus in the context of the Sustainable Development Goals (SDGs). Each scenario is designed to substantially improve one of the nexus sectors: water, land, energy, food or climate. A set of indicators capturing important aspects of both the nexus sectors and the related SDGs is selected to assess whether these scenarios produce synergies or trade-offs with other nexus sectors, and to quantify the effects. Additionally, a scenario is developed that aims to optimize policy action across nexus sectors, providing an example of a holistic approach that achieves multiple Sustainable Development Goals.
The results of this study highlight many synergies and trade-offs. For example, an important trade-off exists between climate change policy and food security targets: large-scale implementation of bio-energy and afforestation to achieve stringent climate targets negatively impacts food security. An interesting synergy exists between the food, water and climate sectors: promoting healthy diets reduces water use, improves water quality and increases the uptake of carbon by forests.
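The sector-by-sector bookkeeping behind such a scenario comparison can be sketched in a few lines. The indicator names and values below are invented for illustration (they are not IMAGE/MAgPIE output); the point is only the sign convention used to label a cross-sector effect as a synergy or a trade-off.

```python
# Hypothetical sketch: classifying cross-sector effects of a single-sector
# scenario as synergies or trade-offs relative to a baseline.

def classify_effects(baseline, scenario, higher_is_better):
    """Label each indicator's change relative to the baseline."""
    effects = {}
    for sector, value in scenario.items():
        delta = value - baseline[sector]
        if not higher_is_better[sector]:
            delta = -delta  # flip sign so positive always means improvement
        if delta > 0:
            effects[sector] = "synergy"
        elif delta < 0:
            effects[sector] = "trade-off"
        else:
            effects[sector] = "neutral"
    return effects

# Illustrative numbers for a stringent-climate scenario: emissions fall
# (the targeted improvement) but food prices rise, harming food security.
baseline = {"climate_GtCO2": 40.0, "food_price_index": 100.0, "water_km3": 2500.0}
climate_scenario = {"climate_GtCO2": 15.0, "food_price_index": 130.0, "water_km3": 2450.0}
lower_is_better = {"climate_GtCO2": False, "food_price_index": False, "water_km3": False}

effects = classify_effects(baseline, climate_scenario, lower_is_better)
```

Applied across all five single-sector scenarios, a table of such labels is what the multi-model comparison summarizes.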
How to cite: Doelman, J., Kram, T., Bodirsky, B., Weindle, I., and Stehfest, E.: Quantifying synergies and trade-offs in the water-land-energy-food-climate nexus using a multi-model scenario approach, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-15192, https://doi.org/10.5194/egusphere-egu2020-15192, 2020.
EGU2020-20646 | Displays | ITS1.1/ERE7.1 | Highlight
Natural capital, ecosystem services, and conservation – Maps to sustain both nature and humanity
Pamela Collins, Rachel Neugarten, Becky Chaplin-Kramer, Dave Hole, and Steve Polasky
Ecosystems around the world support both biodiversity and human well-being, providing essential goods and services including food, fiber, building materials, moisture/temperature regulation, carbon sequestration, disaster risk reduction, and spiritual/cultural meaning. While we all depend on these benefits to survive and thrive, they are especially critical to the world’s most vulnerable people. And as populations and economies grow and the climate continues to change, humanity may find itself needing nature’s benefits in new and unexpected ways.
Mapping ecosystem service provision globally along with biodiversity is essential to effective, just, and lasting conservation planning and prioritization. Identifying global ecosystem service hotspots is key to enabling multi-scale water-energy-land nexus planning for managing socio-economic, climatic, and technological change. This presentation will showcase the latest results of a first-of-its-kind effort to collect the best available spatial datasets of global ecosystem service provision and synthesize them into a common “critical natural capital” framework that highlights global ecosystem service “hotspots” for both humanity overall and the world’s most vulnerable people in particular. Drawn from a wide range of observational and modeling studies conducted by physical and social scientists around the world, this innovative synthesis represents the first attempt to create an integrated spatial map of all that we know about humanity’s dependence on nature, on land and at sea.
Biodiversity is intimately linked to ecosystem services, since intact ecosystems with diverse and abundant native flora and fauna have the greatest ability to provide these irreplaceable services to humanity. Thus, conserving nature for biodiversity and conserving nature for human well-being are two sides of the same coin. This presentation will explore how to integrate these maps of the world’s critical natural capital into the global conservation conversation. These maps will enable investors and policymakers at the global and national scales to explore the potential consequences to humanity of diverse area-based conservation strategies, providing crucial context for the Post-2020 Global Biodiversity Framework and related conversations.
Sustainable use and management of land and sea, in line with the vision outlined by the Sustainable Development Goals, is essential to preserving both biodiversity and humanity’s ability to thrive on this planet. The upcoming negotiation of the Post-2020 Global Biodiversity Framework represents a key opportunity to set the planet on a path to more strategic and effective management of the terrestrial and marine realms, and our maps can inform decision-making around the size and spatial distribution of protected areas and other effective conservation measures. Society can only manage what it can monitor, and with the clearer vision of the most important places for both biodiversity conservation and ecosystem service provision these maps provide, humanity will be well-poised to start the next decade off on the right foot.
How to cite: Collins, P., Neugarten, R., Chaplin-Kramer, B., Hole, D., and Polasky, S.: Natural capital, ecosystem services, and conservation – Maps to sustain both nature and humanity, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20646, https://doi.org/10.5194/egusphere-egu2020-20646, 2020.
EGU2020-4106 | Displays | ITS1.1/ERE7.1
Downscaling flows in the water-food-energy Nexus
Stefan C. Dekker, Maria J. Santos, Hanneke Van 'tVeen, and Detlef P. van Vuuren
The flows between the components of the water-food-energy nexus vary in both time and space and depend on many driving factors. In this study we use global scenarios from the integrated assessment model IMAGE to analyse future changes in flows in the water, food and energy nexus. Using Sankey diagrams, we show that under a reference scenario the flows between energy and food production will likely increase by 60% and water consumption by 20% by 2050. Including climate action policies, combined with dietary changes, increased yield efficiency and food waste reduction, leads to similar resource use of water and land, and much lower greenhouse gas emissions, compared to 2010.
We found that spatial scales are an important but complicating factor in nexus analysis, because each resource has its own physical and spatial scale characteristics within the nexus. To examine the effect of scaling on future nexus development, we analyse how local decisions and the local availability of biomass as an energy source affect other resources. Biomass use potentially impacts forest systems and may compete with land for food and with water resources within the nexus. The use of biomass, and more specifically charcoal, will likely increase further, mainly due to urbanization in developing countries. We examined how different Shared Socio-economic Pathway (SSP) scenarios (i) shape future demand for biomass for energy, which we compare to remotely sensed and modelled net primary productivity of forested systems, (ii) determine the amount of land needed for biomass production that might compete with food production, and (iii) determine the amount of water needed to produce biomass to meet the different biomass demands. We found that the current productivity of non-protected forests is globally higher than the demand, but that regionally demand is only just met in tropical areas of Central America and Africa, while tropical areas in South America and Indonesia show decreasing biomass demands for energy under the SSP1-SSP3 scenarios. This analysis clearly reveals regional-scale differences in the competition between land and water resources.
We conclude that a nexus framework analysis that estimates flows between the different components across scales is fundamental to understanding system sustainability. Such an approach benefits from combining global scenarios of integrated assessment models with local conditions to understand the sustainability of the nexus in time and space.
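The flow accounting that feeds a nexus Sankey diagram reduces to scaling inter-sector flows by scenario growth factors. A minimal sketch, using invented baseline magnitudes together with the reference-scenario growth rates quoted above:

```python
# Illustrative flow bookkeeping for a nexus Sankey diagram.
# Baseline magnitudes are invented; growth factors follow the
# reference-scenario increases quoted in the abstract.
flows_2010 = {
    ("energy", "food"): 100.0,   # e.g. energy inputs to food production
    ("water", "food"): 1000.0,   # e.g. water consumption for food
}
growth_to_2050 = {
    ("energy", "food"): 0.60,  # +60% energy-food flows
    ("water", "food"): 0.20,   # +20% water consumption
}

flows_2050 = {link: v * (1.0 + growth_to_2050[link])
              for link, v in flows_2010.items()}
```

Each (source, target) pair maps directly onto one ribbon of the diagram, so the same dictionary can drive a plotting library's Sankey renderer.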
How to cite: Dekker, S. C., Santos, M. J., Van 'tVeen, H., and van Vuuren, D. P.: Downscaling flows in the water-food-energy Nexus, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4106, https://doi.org/10.5194/egusphere-egu2020-4106, 2020.
EGU2020-12119 | Displays | ITS1.1/ERE7.1
Multi-scale Food-Energy-Water Nexus to link national, regional and local sustainability based on resource-sheds and system dynamics modeling: A Case study of Japan
Sanghyun Lee, Makoto Taniguchi, and Naoki Masuhara
The aim of this study is to develop a Food-Energy-Water (FEW) Nexus platform based on resource boundaries and system dynamics modeling. For example, a water-shed denotes a river basin, aquifer or water supply area, regarded as a non-tradable boundary, whereas a food-shed covers both food production and consumption areas as well as food trade, and an energy-shed is mainly defined by electricity distribution. Because the boundary of each resource differs, we link the water, energy, and food boundaries as resource-sheds in the FEW Nexus. As a case study, we analyze the interlinkage among national, regional, and local sustainability in terms of resource management and socio-economic-environmental impacts in Japan. First, we analyze the local characteristics of the FEW Nexus at the prefecture level using FEW indices, and assess potential issues under future industrialization or economic growth. Second, we combine the local FEW Nexus into a regional platform, for example the Kansai regional Nexus comprising the Osaka, Kyoto, Shiga, Hyogo, Nara, and Wakayama prefectures. Finally, we adapt the resource-shed boundaries to the regional Nexus and assess, using system dynamics modeling, how changes in local resource management affect regional resource sustainability. In this way we assess the impacts of changes in water, energy, and food management in each prefecture on regional water and energy security in the Kansai region. This study could contribute to developing a common framework for scientists and policy-makers to evaluate sustainable resource management from a multi-scale perspective, and thus has the potential to support integrated water, energy and food security.
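The system-dynamics core of such a platform is a stock-and-flow update: a shared regional water stock replenished by inflow and drawn down by prefecture-level withdrawals, stepped over time. The sketch below is hypothetical; the stock, inflow, and withdrawal figures are invented for illustration, not taken from the study.

```python
# Minimal stock-and-flow sketch of a regional water resource shared by
# several prefectures. Units and numbers are purely illustrative.

def simulate(stock, inflow, withdrawals, years, dt=1.0):
    """Step a single water stock forward: inflow minus total withdrawal."""
    history = [stock]
    for _ in range(years):
        stock += (inflow - sum(withdrawals.values())) * dt
        stock = max(stock, 0.0)  # a physical stock cannot go negative
        history.append(stock)
    return history

# Hypothetical prefecture-level withdrawals within one regional boundary.
withdrawals = {"Osaka": 30.0, "Kyoto": 10.0, "Hyogo": 15.0}
history = simulate(stock=500.0, inflow=50.0, withdrawals=withdrawals, years=5)
```

Changing one prefecture's withdrawal and re-running the simulation is the basic experiment the abstract describes: assessing how a local management change propagates to regional resource security.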
How to cite: Lee, S., Taniguchi, M., and Masuhara, N.: Multi-scale Food- Energy-Water Nexus to link national, regional and local sustainability based on resource-sheds and system dynamics modeling: A Case study of Japan, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12119, https://doi.org/10.5194/egusphere-egu2020-12119, 2020.
EGU2020-13296 | Displays | ITS1.1/ERE7.1
Benefits of Cross-Border Cooperation for Achieving Water-Energy-Land Sustainable Development Goals in the Indus Basin
Adriano Vinca, Simon Parkinson, and Keywan Riahi
The Indus Basin, a densely irrigated area home to about 300 million people, faces growing demands for water, energy and food in the coming decades. With no abundant surface water left in the basin and accelerating use of groundwater, long-term strategic and integrated management of water and its interlinked sectors (water-energy-land) is fundamental for the sustainable development of the region. Cooperation among riparian countries is an alternative to the current situation that could help achieve the water-energy-land-related Sustainable Development Goals, maximizing socio-environmental benefits and minimizing costs. We present a scenario-based analysis using numerical models (the Nexus Solution Tool) in which we link local issues and policies to the Sustainable Development Goals, showing the magnitude and geographical location of the investments required to meet the SDGs and the associated impacts. Finally, we discuss the barriers to cross-border cooperation and explore cases of partial cooperation, which confirm significant environmental and economic benefits.
How to cite: Vinca, A., Parkinson, S., and Riahi, K.: Benefits of Cross-Border Cooperation for Achieving Water-Energy-Land Sustainable Development Goals in the Indus Basin, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13296, https://doi.org/10.5194/egusphere-egu2020-13296, 2020.
EGU2020-20100 | Displays | ITS1.1/ERE7.1
Balancing Local and Global Sustainability of Urban Water Supply Systems with Water Security and Resilience Goals
Elisabeth Krueger, Dietrich Borchardt, James Jawitz, and Suresh Rao
The sustainability of urban water systems is commonly analyzed based on local characteristics, such as the protection of urban watersheds or the existence of nature-based solutions for stormwater drainage. Water embedded in food and other goods consumed within cities, or the pollution caused by their production, is generally not assessed as part of urban water system sustainability. However, indirect feedbacks can produce negative impacts (e.g., drought and water quality impairments) resulting from these water and ecological footprints. We therefore suggest that, within the context of nexus thinking, embedded water and ecosystem impacts should be part of urban water governance considerations.
We quantify the local and global sustainability of urban water supply systems (UWSS) based on the performance of local sustainable governance and the size of global water and ecological footprints. Building on prior work on UWSS security and resilience, we develop a new framework that integrates security, resilience, and sustainability to investigate trade-offs between these three distinct and inter-related dimensions. Security refers to the level of services, resilience is the system’s ability to respond to and recover from shocks, and sustainability refers to the long-term viability of system services. Security and resilience are both relevant at local scale (city and surroundings), while for sustainability cross-scale and -sectoral feedbacks are important. We apply the new framework to seven cities selected from diverse hydro-climatic and socio-economic settings on four continents. We find that UWSS security, resilience, and local sustainability coevolve, while global sustainability correlates negatively with security. Approaching these interdependent goals requires governance strategies that balance the three dimensions within desirable and viable operating spaces. Cities outside these boundaries risk system failure in the short-term, due to lack of security and resilience, or face long-term consequences of unsustainable governance strategies. Our findings have strong implications for policy-making, strategic management, and for designing systems to operate sustainably at local and global scales, and across sectors.
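The idea of a viable operating space bounded by the three dimensions can be illustrated with a simple threshold check. The city values and thresholds below are invented for illustration; they are not the study's data or its actual index definitions.

```python
# Hypothetical sketch of a "viable operating space" test: a city must
# exceed minimum security and resilience scores while keeping its
# global footprint below a maximum. All numbers are illustrative.

def viable(city, min_security=0.6, min_resilience=0.5, max_footprint=0.7):
    """True if the city lies inside the viable operating space."""
    return (city["security"] >= min_security
            and city["resilience"] >= min_resilience
            and city["footprint"] <= max_footprint)

cities = {
    # High service levels but a large global footprint: the trade-off
    # between security and global sustainability noted above.
    "A": {"security": 0.9, "resilience": 0.8, "footprint": 0.9},
    # Moderate services with a modest footprint: balanced governance.
    "B": {"security": 0.7, "resilience": 0.6, "footprint": 0.5},
}

status = {name: viable(c) for name, c in cities.items()}
```

City A fails the footprint bound despite high security, mirroring the negative correlation between security and global sustainability reported in the abstract.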
The corresponding article was accepted for publication in Environmental Research Letters on Jan. 15, 2020.
How to cite: Krueger, E., Borchardt, D., Jawitz, J., and Rao, S.: Balancing Local and Global Sustainability of Urban Water Supply Systems with Water Security and Resilience Goals, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20100, https://doi.org/10.5194/egusphere-egu2020-20100, 2020.
EGU2020-899 | Displays | ITS1.1/ERE7.1
Assessment of Local Water Resources for Sustainable Development Goals
Homero Castanier
Based on the framework of the Sustainable Development Goals (SDGs), targets, and indicators for 2016-2030, the objective of this paper is to emphasize water resources as a cross-cutting issue at the center of sustainable development, presenting a specific analysis of the importance of better knowledge of the hydrology and hydrometrics of major and local basins as fundamental information for sustainable water resources management. This implies reviewing specific indicators related to town-level knowledge of water resources assessment and availability, which is fundamental to life, health, food security, energy, the environment, and human well-being.
There are limitations, including the lack of accurate and complete data. Sub-national variation in water resources and water withdrawal can be considerable, down to the level of local or individual river basins, and seasonal variations in water resources are often not accounted for. Regional values may mask huge differences within regions and within countries, where people may live in areas of serious water scarcity even though the country as a whole has enough renewable water resources.
In order to ensure sustainable withdrawals and supply of freshwater to address water scarcity, and to implement integrated water resources management at all levels (targets 6.4 and 6.5 of the SDGs), a fundamental baseline is the assessment of available and exploitable water resources at the local level, as well as the feasibility of their development.
Data on water resources availability is a key indicator that should be approached at the local level, since in most countries: i) most local and rural communities and towns lack information on their water resources; ii) local information will improve the accuracy of renewable water resources estimates at the country level; iii) rural settlements are in general the most vulnerable, lacking drinking water services and irrigation for food security; and iv) small variations in the estimates of available water resources can have social, environmental and economic consequences for water resources management and sustainable development planning.
Based on the analysis of the ecohydrology of two case studies, it is demonstrated that there cannot be effective integrated water resources management (IWRM) at the town level if information on water resources availability is lacking.
Considering the limitations described with regard to the goals, targets, and indicators for sustainable withdrawals and supply of freshwater to address water scarcity, and for the implementation of integrated water resources management, it is indispensable to have adequate and reliable local hydrological and hydrometric data and monitoring systems, which would help to partially overcome these limitations by assessing available water supplies for community planning.
With reference to the 2030 Agenda, countries should implement a complementary indicator, such as the percentage of the population whose water sources are monitored by adequate measuring methods, providing information on the surface water and groundwater regimes that influence water availability.
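The complementary indicator proposed above reduces to a population-weighted coverage ratio. A minimal sketch follows; the community records are invented for illustration.

```python
# Hypothetical computation of the proposed indicator: the share of the
# population whose water sources are monitored. Community data below
# are invented for illustration.
communities = [
    {"population": 12000, "sources_monitored": True},
    {"population": 3500,  "sources_monitored": False},
    {"population": 8000,  "sources_monitored": True},
]

total = sum(c["population"] for c in communities)
monitored = sum(c["population"] for c in communities if c["sources_monitored"])
indicator = 100.0 * monitored / total  # percent of population covered
print(f"{indicator:.1f}% of the population has monitored water sources")
```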
How to cite: Castanier, H.: Assessment of Local Water Resources for Sustainable Development Goals, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-899, https://doi.org/10.5194/egusphere-egu2020-899, 2020.
EGU2020-20461 | Displays | ITS1.1/ERE7.1
Characterising and quantifying links between water, energy, and food consumption in a water-poor, energy-rich city; Adelaide, Australia
Margaret Shanafield, Okke Batelaan, and Sundar Subramani
More than half of the world’s population are urban dwellers, and this percentage is rising. Understanding the links between the water, energy, and food requirements of cities therefore plays a critical role in determining global resource consumption. Adelaide is a mid-size coastal Australian city with a population of almost 1.3 million. With its plentiful wind and solar resources, the Adelaide region has one of the highest rates of renewable energy production in the world, as well as access to additional conventional energy supplies from other parts of the Australian network. However, water supplies in the region are limited: groundwater depletion is already occurring in the food production areas surrounding the city, and municipal water supplies rely heavily on the fully allocated Murray River system. Optimization of the food, energy and water requirements of the city therefore provides an opportunity for optimal use of valuable resources. Quantifying these sectors was not trivial and presented data availability and comparison challenges. Lessons learned from a quantitative example of the water-energy-food nexus at city scale are presented.
How to cite: Shanafield, M., Batelaan, O., and Subramani, S.: Characterising and quantifying links between water, energy, and food consumption in a water-poor, energy-rich city; Adelaide, Australia, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20461, https://doi.org/10.5194/egusphere-egu2020-20461, 2020.
EGU2020-7296 | Displays | ITS1.1/ERE7.1
Data-driven modelling of potential trajectories of the global water-energy-food nexus system
Sara Masia and Janez Susnik
There is increasing interest in the global water-energy-food (WEF) system and potential future system trajectories under global change, especially considering growing concerns over resource exploitation and sustainability. Previous studies investigating different aspects of this system have a number of shortcomings, including not analysing all nexus sectors and/or not accounting for possible feedbacks between sectors, which makes it difficult to identify system-wide trade-offs and to compare studies. A global analysis of the WEF system linked to changes in potential gross domestic product (GDP) growth is presented, integrating the four sectors (water-energy-food-GDP) into a coherent analysis and modelling framework. GDP was included because previous related work demonstrates a link between GDP and each WEF sector. A system dynamics modelling approach quantifies previously qualitative descriptions of the global WEF-GDP system, while a Monte-Carlo sampling approach is adopted to characterise variability in resource use and growth at the global level. Correlative and causal analyses show links of varying strength between sectors. For example, GDP and electricity consumption are strongly correlated, while food production and electricity consumption are weakly correlated. Causal analysis confirms that ‘correlation does not imply causation’, and there are noticeable asymmetries in causality between certain sectors. Historical WEF-GDP values are well recreated, with the exception of electricity production/consumption. Future scenarios were assessed using seven GDP growth estimates to 2100. Water withdrawals in 2100 and food production in 2050 are close to other literature estimates arrived at using very different means. Results suggest that humanity risks exceeding the ‘safe operating space’ for water withdrawal. Reducing water withdrawal while maintaining or increasing food production is critical, and both should be decoupled from economic growth.
Electricity production/consumption is also expected to grow, with the strength of growth linked to GDP pathways. The climate impacts of this production and consumption will depend greatly on the fuel sources used for power generation. This work provides a quantitative modelling framework for previously qualitative descriptions of the WEF-GDP system, offering a platform on which to build.
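The Monte-Carlo approach described above can be sketched in miniature: sample GDP growth trajectories and propagate them through a coupling to water withdrawal, then count how often a boundary is exceeded. The elasticity, growth-rate distribution, starting withdrawal, and the ~4000 km3/yr boundary are illustrative assumptions, not the paper's calibrated model.

```python
# Illustrative Monte-Carlo sketch (NOT the study's model): GDP growth is
# sampled each year and linked to global water withdrawal through an
# assumed elasticity; runs are checked against a 'safe operating space'.
import random

random.seed(42)

W0 = 3900.0        # assumed current global withdrawal, km3/yr
BOUNDARY = 4000.0  # assumed 'safe operating space' for withdrawal, km3/yr
ELASTICITY = 0.3   # assumed % change in withdrawal per % change in GDP

def simulate(years=80, n_runs=200):
    """Return the fraction of sampled trajectories exceeding the boundary."""
    exceed = 0
    for _ in range(n_runs):
        w = W0
        for _ in range(years):
            g = random.gauss(0.02, 0.01)  # sampled annual GDP growth rate
            w *= 1.0 + ELASTICITY * g     # withdrawal follows GDP growth
        if w > BOUNDARY:
            exceed += 1
    return exceed / n_runs

share = simulate()
print(f"share of runs exceeding the boundary by 2100: {share:.2f}")
```

With these toy parameters nearly every trajectory exceeds the boundary, echoing the abstract's warning; decoupling withdrawal from GDP corresponds to driving the elasticity toward zero.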
How to cite: Masia, S. and Susnik, J.: Data-driven modelling of potential trajectories of the global water-energy-food nexus system, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7296, https://doi.org/10.5194/egusphere-egu2020-7296, 2020.
EGU2020-18681 | Displays | ITS1.1/ERE7.1 | Highlight
Exploration of the Dynamics in the Swedish Water-Energy-Land-Food-Climate Nexus: Lessons from Combining Policy Analysis and System Dynamics Modeling
Malgorzata Blicharska, Janez Susnik, Sara Masia, Lotte van den Heuvel, Thomas Grabs, and Claudia Teutschbein
EGU2020-4238 | Displays | ITS1.1/ERE7.1
Water-related synergists and antagonists in the SDGnexus Network
Björn Weeser and Lutz Breuer
Funded by the German Academic Exchange Service (DAAD) as a Higher Education Excellence in Development Cooperation (exceed) project, the SDGnexus Network is a global community of universities, research centers and stakeholders committed to promoting the 2030 Agenda for sustainable development. Supported for five years starting in 2020, the network will establish a common research framework on the inter-linkages, trade-offs, and synergies between the Sustainable Development Goals (SDGs). As part of this endeavor, we will focus on water-related SDGs and how they interact with, support, and counteract other SDGs. We will particularly investigate the interaction between SDGs related to land use, food provision, and energy production.
Consisting of seven core university partners, four of them in Latin America (two each in Ecuador and Colombia) and three in Central Asia (Uzbekistan, Tajikistan, and Kyrgyzstan), the network links research between countries facing typical development challenges such as the resource curse or the middle-income trap.
Both regions have interrelated water, energy, and food concerns. Hydropower generation upstream can, for example, have adverse effects on agricultural water use downstream. The timing of water use throughout the year is a potential source of conflict in Central Asia, such as in the Syr Darya and Amu Darya basins that discharge into the Aral Sea, where energy demand in winter conflicts with agricultural crop water requirements in summer. In the Amazon basin, deforestation likely changes the large-scale water cycle and therefore local to regional rainfall patterns through modified moisture recycling. Such changes could result in less rainfall on the eastern side of the Andes and consequently diminish discharge into the Amazon basin from the Andean headwaters.
Climate change will further increase the pressure on water resources. The glacier-fed headwaters of the Tian Shan mountains in Central Asia and of the Andes are suspected of undergoing dramatic changes in the near future. While increased summer runoff due to rapid glacier melt is expected initially, runoff will decrease in the long term due to the loss of the glaciers as intermediate water reservoirs.
Overall, the SDGnexus Network will build bridges between water-related science, education, and development. It supports the identification of potential areas of intervention for decision-makers and reduces the research gap on inter-linkages between SDG goals and targets. Furthermore, the network aims to develop alternative land use options under climate change conditions to sustain environmental flows in both world regions.
How to cite: Weeser, B. and Breuer, L.: Water-related synergists and antagonists in the SDGnexus Network, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4238, https://doi.org/10.5194/egusphere-egu2020-4238, 2020.
EGU2020-4576 | Displays | ITS1.1/ERE7.1
Effects of urbanization on food-energy-water systems in mega-urban regions: a case study of the Bohai MUR, China
Caiyun Deng, Hongrui Wang, Shuxin Gong, Jie Zhang, Bo Yang, and Ziyang Zhao
The security of food-energy-water (FEW) systems is an issue of worldwide concern, especially in mega-urban regions (MURs) with high-density populations, industries and carbon emissions. To better understand the hidden linkages between urbanization and FEW systems, the pressure on FEW systems is quantified in a typical rapidly urbanizing region, the Bohai MUR. The correlation between urbanization indicators and the pressure on FEW systems is analyzed, and the mechanism by which urbanization impacts FEW systems is further investigated. Results show that approximately 23% of cropland was lost, 61% of which was converted to construction land, and that urban areas expanded by 132.2% in the Bohai MUR during 1980-2015. The pressure on FEW systems shows an upward trend, with the FEW systems stress index (FEW_SI) ranging from 80.49% to 134.82% and the dominant pressure shifting from the water system to the energy system since 2004. The FEW_SI in the Bohai MUR increases with cropland loss and rising urbanization indicators. Additionally, land use, population, income, policy and innovation are the main channels through which urbanization impacts FEW systems in MURs. This study enhances our understanding of the variation in pressure on FEW systems in MURs and of the effects of urbanization on FEW systems, which helps stakeholders to enhance the resilience of FEW systems and promote sustainable regional development.
Keywords: urbanization, food-energy-water system pressure, linkages, MURs
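A composite stress index like the FEW_SI described above can be sketched as a weighted aggregate of sectoral demand-to-capacity ratios. The construction, weights, and values below are illustrative assumptions, not the study's actual index definition or data.

```python
# Hypothetical sketch of a composite FEW stress index: each sector's
# pressure is its demand expressed as a percentage of carrying capacity,
# and the composite is a weighted average. All numbers are invented.

def sector_stress(demand, capacity):
    """Sectoral stress as a percentage of carrying capacity."""
    return 100.0 * demand / capacity

def few_si(sectors, weights):
    """Weighted composite stress index over the food, energy, water sectors."""
    total_w = sum(weights.values())
    return sum(weights[s] * sector_stress(*sectors[s]) for s in sectors) / total_w

sectors = {          # (demand, capacity), each in its own sectoral units
    "food":   (0.9, 1.0),
    "energy": (1.3, 1.0),  # demand exceeding regional supply capacity
    "water":  (1.1, 1.0),
}
weights = {"food": 1.0, "energy": 1.0, "water": 1.0}
index = few_si(sectors, weights)
print(f"FEW_SI = {index:.1f}%")  # dominant pressure here: the energy sector
```

A value above 100% indicates aggregate demand exceeding carrying capacity, which is how readings such as 134.82% in the abstract can be interpreted.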
How to cite: Deng, C., Wang, H., Gong, S., Zhang, J., Yang, B., and Zhao, Z.: Effects of urbanization on food-energy-water systems in mega-urban regions: a case study of the Bohai MUR, China, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4576, https://doi.org/10.5194/egusphere-egu2020-4576, 2020.
EGU2020-9965 | Displays | ITS1.1/ERE7.1
A Systematic Review of Linkages and Trends in Water-Food-Energy/Urban Nexus Research
Tailin Huang, Min-Che Hu, and I-Chun Tsai
The water-food-energy (WFE) nexus is intertwined with urbanization, land use, and population growth, and is rapidly expanding in scholarly literature and research projects as a novel way to address complex resource and development challenges. Nexus-related research aims to identify trade-offs and synergies of water, energy, and food systems, internalize mutual impacts between the nexus and urban systems, and guide the development of sustainable solutions. However, while the WFE nexus offers a promising conceptual approach, limited research focuses on systematically mapping the water, food, and energy interlinkages and evaluating the research trends and issues in this field.
Water, food, and energy are the basis for human livelihoods and economic activities, and they are closely interrelated: agriculture, forestry, and the energy sector simultaneously depend heavily on and affect water resources. Energy is essential for water management, as well as for agricultural production, processing, and marketing. Land is needed for the production of food, fodder, and renewable energy, as well as for water resource protection. Demographic trends, such as population growth, progressive urbanization, globalization, and changing lifestyles and consumer habits, are increasing pressure on already limited natural resources. A sustainable urban system requires mitigating human impact on natural ecosystems while fulfilling our need for development.
Previous studies have discussed research trends and nexus assessment tools (e.g., Endo et al., 2015, 2017). Despite the increasing use of the WFE nexus in scholarly literature and research projects, few studies have systematically reviewed the broad range of linkages in the body of nexus literature. There is a need for a comprehensive review of, and critical reflection on, existing nexus linkages and issues to gain the big picture, improve clarity, and promote further advances in research on the WFE nexus.
This paper reviews current WFE nexus linkages and issues to promote further development of tools and methods that align with nexus thinking and address the complexity of multi-sectoral resource interactions. As a conceptual framework, the nexus approach leverages an understanding of WEF linkages to promote coherence in policy-making and enhance sustainability. A summary of the most frequently used nexus linkages, issues, and keywords obtained from journal articles provides clues to the current research emphases. The findings will provide a better understanding of trends in this line of research, which will serve as a useful reference for future studies.
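The keyword-summary step described above amounts to counting keyword and co-occurrence frequencies across a corpus of article records. A minimal sketch follows; the sample records and keywords are invented stand-ins, not the review's actual corpus.

```python
# Hypothetical sketch of summarising nexus keywords and linkages:
# count single-keyword frequencies and pairwise co-occurrences across
# (toy) article keyword lists.
from collections import Counter
from itertools import combinations

records = [  # invented stand-ins for per-article keyword lists
    ["water", "energy", "urban"],
    ["water", "food", "energy"],
    ["water", "food", "urban"],
]

# How often each keyword appears across the corpus.
keyword_freq = Counter(k for rec in records for k in rec)

# How often each pair of keywords co-occurs within the same article;
# sorting makes ("energy", "water") and ("water", "energy") one key.
pair_freq = Counter(
    pair for rec in records for pair in combinations(sorted(rec), 2)
)

print(keyword_freq.most_common(3))
print(pair_freq.most_common(3))
```

The most frequent pairs approximate the dominant linkages in the literature, which is the kind of signal the review uses to identify research emphases.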
How to cite: Huang, T., Hu, M.-C., and Tsai, I.-C.: A Systematic Review of Linkages and Trends in Water-Food-Energy/Urban Nexus Research, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9965, https://doi.org/10.5194/egusphere-egu2020-9965, 2020.
EGU2020-11004 | Displays | ITS1.1/ERE7.1
Reservoirs in world’s water towers: Need for appropriate governance processes to reach Sustainable Development Goals
Elke Kellner and Manuela I. Brunner
Mountains play an essential role in storing water and providing it to downstream regions and are therefore commonly referred to as the ‘water towers of the world’. In particular, they sustain runoff in the lowlands’ low-flow season through snow- and glacier melt. Mountain runoff thus plays an important role in achieving the UN Sustainable Development Goals (SDGs), in particular those regarding water, food, and energy. However, the mountains’ water provision service is strongly challenged by climate change, which leads to the retreat and volume loss of glaciers, rising snow lines, and changes in precipitation amount and variability. One potential strategy for addressing these changes is the construction of new water reservoirs or the adjustment of current reservoir management strategies. These strategies need to take account of various, potentially competing water uses rooted in different sectors, relevant at different scales, and involving governments with different economic interests.
We investigate the governance process related to the planning of a future reservoir in one of the most important water towers of the world, the European Alps. We ask why and how governance processes can lead to a coordination gap between upstream reservoir planning and the development of strategies to alleviate downstream water shortage. Based on a case study in the Swiss Alps, we show that downstream water deficits could potentially be covered by a newly constructed upstream reservoir if management strategies were flexible enough. However, water uses other than hydropower were not taken into account in the governance processes that led to the granting of a concession for the new reservoir. Instead, decision-making within a participative process was influenced by (a) a lack of knowledge and data, (b) an interest in increasing renewable energy production, (c) a focus on environmental agreements, and (d) economic interests. We conclude that upstream and downstream water demands need to be balanced in governance processes. Such balancing can be achieved by clarifying process design and by evaluating who can lead such complex processes with actors from different governments and sectors under the umbrella of non-uniform and incoherent institutions.
How to cite: Kellner, E. and Brunner, M. I.: Reservoirs in world’s water towers: Need for appropriate governance processes to reach Sustainable Development Goals, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11004, https://doi.org/10.5194/egusphere-egu2020-11004, 2020.
EGU2020-11142 | Displays | ITS1.1/ERE7.1
An Input-Output Approach to Thailand's Energy Transition: Effects on the Land, Water and FoodIpsita Kumar, Kuishuang Feng, Varaprasad Bandaru, and Laixiang Sun
Population and economic growth have increased demand for food, energy, and other resources. At the same time, these sectors compete for limited water and land resources. Thailand faces similar challenges as it transitions towards energy independence, increasing renewable energy production for energy security and aiming to become a future exporter of energy. Thailand implemented the Alternative Energy Development Policy (AEDP) in 2012, which led to a shift in land use from rice for food to sugarcane for energy production, especially from crop residue. Currently, crop residue use for electricity production is well below its potential: in 2017, only 1.06% of the potential of paddy husk and 4.44% of that of sugarcane bagasse were used for electricity generation (DEDE, 2017). The AEDP seeks to increase energy production from residues by targeting future growth in demand, technological change, and potential areas for renewable energy production. This policy will also affect food supply, water, and land use. The sugarcane act in Thailand sets minimum internal prices, in line with international sugar prices, to safeguard the industry and farmers. However, this safeguard does not apply to sales for energy production, which discourages farmers from selling sugarcane to power plants. This study uses an input-output model to understand the economic effects of using crop residue for electricity on the economy, land, labour, and other factors. The study runs two future scenarios and two historical years (2011 and 2014) to assess these impacts. The first scenario examines the Ministry of Industry policy to stop sugarcane residue burning by 2022. The second scenario examines the AEDP, which seeks to rapidly increase the generation of electricity from biomass by 2036.
The results show that in the first scenario, where the entire potential of sugarcane bagasse is used for electricity production, electricity generated from all other sources remains nearly the same; reliance on non-renewable sources therefore does not change from 2014 to 2022. Similar results are seen for water use, labour, and capital, where there is no change over time. The second scenario shows that while the AEDP increases production from biomass, it does not capture the full potential, so electricity production from crop residues is much lower than in scenario 1. This leads to increasing production of electricity from other non-renewable sources. We also see a reduction in paddy production and a rise in cane production from before the implementation of the AEDP into the future. We conclude that while Thailand is moving towards energy security, policies should target technological development and mechanization at the farm level. The subsidies for farmers selling cane for sugar production should also extend to cane sold for energy production, as well as to rice. To ensure a reliable energy supply, irrigation would also be required, as droughts and floods are both common in different regions of Thailand. Another option would be to raise the AEDP target so that a larger share of the sugarcane and rice residue potential is used for electricity generation.
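The core of such an assessment — the total sectoral output and resource use implied by a final-demand vector — can be sketched with a standard Leontief input-output calculation. The sectors, coefficients, and intensities below are illustrative placeholders, not the study's Thai input-output tables:

```python
import numpy as np

# Hypothetical 3-sector economy (agriculture, energy, other); the
# technical coefficients A[i, j] give the input from sector i needed
# per unit of output of sector j. All numbers are illustrative.
A = np.array([
    [0.10, 0.05, 0.02],
    [0.08, 0.15, 0.10],
    [0.20, 0.25, 0.30],
])
final_demand = np.array([100.0, 50.0, 200.0])

# Leontief quantity model: total output x solves x = A x + d,
# i.e. (I - A) x = d.
x = np.linalg.solve(np.eye(3) - A, final_demand)

# Resource impacts then follow from direct intensities,
# e.g. water use (m3) per unit of sectoral output (illustrative).
water_intensity = np.array([3.0, 1.5, 0.5])
total_water = water_intensity @ x
```

A policy scenario (e.g. diverting bagasse to electricity) is then represented by changing `A` or `final_demand` and comparing the implied outputs and resource uses.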
How to cite: Kumar, I., Feng, K., Bandaru, V., and Sun, L.: An Input-Output Approach to Thailand's Energy Transition: Effects on the Land, Water and Food, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11142, https://doi.org/10.5194/egusphere-egu2020-11142, 2020.
EGU2020-11722 | Displays | ITS1.1/ERE7.1
Integrating hydrological constraints for hydropower in energy models: the case of the Zambesi River Basin in the Southern African Power PoolMartina Daddi, Alessandro Barbieri, Andrea Castelletti, Matteo Giuliani, Emanuela Colombo, Matteo Rocco, and Nicolò Stevanato
Ensuring reliable supplies of energy and water are two important Sustainable Development Goals, particularly for Sub-Saharan African countries. The energy and water challenges are, however, not independent, and the interlinkages between them are increasingly recognized and studied using water-energy nexus approaches. Yet most existing modeling tools do not accurately reproduce this nexus and thus provide limited support for the design of sustainable development plans.
In this work, we contribute an integrated modeling approach by embedding a hydrological description of the Zambesi River Basin into an energy model of the Southern African Power Pool (SAPP). The SAPP is the largest African power pool in terms of installed capacity and coordinates the planning and operation of the electric power system among its twelve member countries (Angola, Botswana, DRC, Lesotho, Malawi, Mozambique, Namibia, South Africa, Swaziland, Tanzania, Zambia, and Zimbabwe). Specifically, we use the Calliope energy model, which allows internally coherent scenarios of how energy is extracted, converted, transported, and used to be built at arbitrary spatial and temporal resolution from time-series input data. As in many state-of-the-art energy models, hydropower production is poorly described: water availability constraints are neglected and hydropower plants are assumed to produce at their nominal capacity in each timestep. Building on existing Calliope modeling components, we improved the hydrological description of the main reservoirs in the Zambezi River Basin within the overall SAPP model, namely Ithezithezi (120 MW), Kafue Gorge (990 MW), Kariba (1.8 GW), and Cahora Bassa (2 GW). Our improvements include the most relevant hydrological constraints, such as time-varying water availability determined by inflow patterns, time-varying hydraulic head, evaporation losses, cascade releases, and minimum and maximum storage values. The model outcomes, such as the storage time series of each reservoir and the power production by source for each country, are then evaluated for different hydrologic scenarios. Our results are expected to demonstrate the value of advancing the hydropower characterization in energy models by capturing reservoir dynamics and water resource availability. These improvements will be particularly valuable for supporting hydropower expansion in African countries that rely mostly on hydropower to satisfy their growing energy demand.
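The storage constraints described above can be sketched as a forward mass-balance simulation. This is an illustrative simplification, not the actual Calliope LP formulation, and all names and parameter values are assumptions:

```python
# Reservoir mass balance: s[t+1] = s[t] + inflow - release - evaporation,
# kept within [s_min, s_max]; water above s_max is spilled.
def simulate_reservoir(inflows, releases, s0, s_min, s_max, evap_frac=0.01):
    storage, spills = [s0], []
    for q_in, q_out in zip(inflows, releases):
        s = storage[-1] + q_in - q_out - evap_frac * storage[-1]
        spills.append(max(0.0, s - s_max))          # excess is spilled
        storage.append(min(max(s, s_min), s_max))   # enforce storage bounds
    return storage, spills

# Example: a wet season followed by drawdown (illustrative volumes, hm3/month)
trace, spill = simulate_reservoir(
    inflows=[30, 40, 25, 10], releases=[20, 20, 20, 20],
    s0=100, s_min=50, s_max=130)
```

In an energy model these relations enter as linear constraints on the decision variables rather than a fixed simulation, so the optimizer chooses releases (and hence hydropower output) consistent with water availability.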
How to cite: Daddi, M., Barbieri, A., Castelletti, A., Giuliani, M., Colombo, E., Rocco, M., and Stevanato, N.: Integrating hydrological constraints for hydropower in energy models: the case of the Zambesi River Basin in the Southern African Power Pool, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11722, https://doi.org/10.5194/egusphere-egu2020-11722, 2020.
EGU2020-11780 | Displays | ITS1.1/ERE7.1
The Impact of Charcoal Production for Energy on Tropical Rainforest Resources in NigeriaAngelique Lansu, Jaap Bos, and Wilfried Ivens
In Sub-Saharan Africa, many people depend on biomass for their household energy. Charcoal production is a common technique for converting biomass into a useful energy source, and Nigeria is the biggest charcoal producer in Sub-Saharan Africa. A large amount of wood is harvested from Nigerian forests for this charcoal production. The charcoal-land use change-energy nexus imposes a considerable burden through the amount of wood that must be extracted from the forest for charcoal production; charcoal production is therefore linked to deforestation and forest degradation. However, it is not clear to what extent the demand for charcoal in Nigeria contributes to deforestation through land use change and to forest degradation through selective wood logging. In this study, an attempt was made to answer this question and to project which situation could occur by 2030, given the expected population growth in Nigeria. To this end, literature and open data on charcoal production, deforestation, forest degradation, and population growth in Nigeria were collected and analysed. Subsequently, calculations were carried out to determine to what extent charcoal production contributed to deforestation in the period 1990-2015. In this period, the share of deforestation due to charcoal production increased from 6% to 14%. If the charcoal production expected for 2030 were to apply to the current situation, this share would be around 20%. The quantity of wood required can also be expressed in hectares of forest biomass; in that case, around 80,000 ha would be required in 2030. To validate these findings, further research is needed on the amount of biomass per hectare in Nigerian forests and on the amount of charcoal exported, not only as a source of household energy but also globally as barbecue fuel. A more extensive analysis of open data on the charcoal-land use change-energy nexus at multiple scales will help to project future interlinkages.
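A figure of this kind follows from a back-of-the-envelope chain: charcoal demand → wood demand (via kiln yield) → forest area (via biomass density). The kiln yield, biomass density, and charcoal tonnage below are assumed values chosen only for illustration (the abstract itself notes that biomass per hectare needs validation):

```python
# Hectares of forest biomass needed to supply a given charcoal output.
# kiln_yield: tonnes of charcoal per tonne of wood (assumed 20%);
# biomass_t_per_ha: harvestable woody biomass per hectare (assumed 100 t/ha).
def forest_area_for_charcoal(charcoal_t, kiln_yield=0.2, biomass_t_per_ha=100.0):
    wood_t = charcoal_t / kiln_yield       # wood demand implied by charcoal demand
    return wood_t / biomass_t_per_ha       # area carrying that much biomass

# Under these assumptions, ~80,000 ha corresponds to ~1.6 Mt of charcoal.
area_ha = forest_area_for_charcoal(1.6e6)
```

Varying the two assumed parameters over plausible ranges gives the sensitivity of the area estimate that the called-for validation research would narrow down.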
How to cite: Lansu, A., Bos, J., and Ivens, W.: The Impact of Charcoal Production for Energy on Tropical Rainforest Resources in Nigeria, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11780, https://doi.org/10.5194/egusphere-egu2020-11780, 2020.
EGU2020-11937 | Displays | ITS1.1/ERE7.1
Assessing the impacts of shale gas development on the water-energy nexus across the semiarid Mexico’s northeastSaul Arciniega-Esparza, Agustín Breña-Naranjo, Antonio Hernández-Espriú, and Adrián Pedrozo-Acuña
An intensification of water use for hydraulic fracturing (HF) to extract oil and gas from deep shale formations has been observed in recent years across the USA, increasing concerns about water resources management in water-limited regions around the world. At the same time, HF has been associated with several environmental and water quality/quantity impacts in many developed plays in the USA, China, and Canada. Nevertheless, assessing impacts on emergent plays involves several difficulties, since the future development of HF is generally unknown and local data to evaluate water resources baselines are lacking.
In this work, we present a framework that combines remote-sensing-derived data, used to assess the water resources baseline, with a statistical model developed to project HF activities. Remote sensing and global land surface model products of precipitation (CHIRPS), evapotranspiration (MODIS), recharge (WaterGAP model), infiltration and runoff (MERRA), and water storage (GRACE) were used to estimate water availability and the hydrological response of watersheds and aquifers. Scenarios of HF were generated using a statistical model that simulates HF water requirements, hydrocarbon production, flowback-produced water, and economic trends, among other factors that influence HF development.
The proposed framework was applied to evaluate the impacts of HF development on the water-energy nexus in the transboundary Eagle Ford play, located in Mexico’s northeast, a water-limited region that contains substantial reserves of shale gas.
Scenarios were generated following two economic projections and assuming the water use trends and historical HF development of the Eagle Ford, Barnett, and Haynesville plays in Texas, which are geologically similar to the Mexican Eagle Ford play.
Results suggest that the highest impacts on the water-energy nexus in Mexico result from the reported trends in the Eagle Ford play in Texas, with ~14,000 wells drilled in ten years and a cumulative water use of ~450 million cubic meters, representing about 69% of the annual groundwater concessions for municipal use.
The framework presented in this work can be applied to other plays around the world to assess the impacts of HF on water resources and the implications for the water-energy nexus.
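The water-use component of such a scenario generator can be sketched as a Monte Carlo sum of per-well water use over a drilling schedule. This is not the authors' statistical model; the distribution and its parameters are assumptions chosen only to be roughly consistent with the ~14,000 wells and ~450 million m3 cited above:

```python
import random

# Total HF water use (m3) for a multi-year drilling schedule, with
# per-well water use drawn from an assumed Gaussian fitted to analogue
# plays (mean ~32,000 m3/well here is illustrative).
def cumulative_hf_water_use(wells_per_year, mean_m3=32000, sd_m3=8000, seed=42):
    rng = random.Random(seed)   # fixed seed for a reproducible scenario
    total = 0.0
    for n_wells in wells_per_year:
        for _ in range(n_wells):
            total += max(0.0, rng.gauss(mean_m3, sd_m3))
    return total

# e.g. ten years at 1,400 wells/year -> ~14,000 wells, on the order
# of the ~450 Mm3 reported for the Eagle Ford analogue trend.
total_m3 = cumulative_hf_water_use([1400] * 10)
```

Comparing `total_m3` against a baseline availability estimate (from the remote-sensing products listed above) then yields nexus indicators such as the share of groundwater concessions consumed.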
How to cite: Arciniega-Esparza, S., Breña-Naranjo, A., Hernández-Espriú, A., and Pedrozo-Acuña, A.: Assessing the impacts of shale gas development on the water-energy nexus across the semiarid Mexico’s northeast, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11937, https://doi.org/10.5194/egusphere-egu2020-11937, 2020.
EGU2020-13430 | Displays | ITS1.1/ERE7.1
Investigation of the effect of backwater on the propagation of thermal pollution during operation of a thermal power plantYanina Parshakova, Tatyana Lyubimova, Anatoliy Lepikhin, and Yuriy Lyakhin
For the operation of large thermal power plants, receiving reservoirs are the most common type of cooler. Depending on the capacity of the power plant and the size of the water body used as the receiving reservoir, the cooling system may be organized as once-through (direct-flow) or recirculating. The main task in operating the cooling system effectively is to ensure that it functions stably under conditions of significant variability in both hydrological and meteorological parameters. For solving this problem, the development of technological operation schemes based on computational experiments is of fundamental importance. It is also important to take into account the effect of thermal pollution on the ice-thermal regime and on hydrobiological processes in the area influenced by the discharge of heated water. At the same time, both technological and environmental criteria must be considered when assessing the parameters of the temperature fields created by the discharge of heated water, which depend on the full set of technological and hydrometeorological parameters.
In the present paper, we considered scenarios of the impact of the Perm Power Plant, which uses a direct-flow cooling system, on the Kama reservoir; these are of great interest from both environmental and technological points of view. Three-dimensional numerical simulation was carried out for different operating modes of the Kama reservoir. Since significant vertical temperature heterogeneity is observed in receiving reservoirs, calculations should in general be conducted with 3D models to achieve sufficient accuracy. However, performing such calculations for large water bodies, given the extremely limited current monitoring network, encounters very significant difficulties due to limited computing resources. Therefore, a combined calculation scheme is proposed and implemented, including models in 1D, 2D, and 3D formulations: a 1D model was built for the entire reservoir, a 2D model for a 30 km section adjacent to the Perm Power Plant, and a 3D model for a 10 km section that includes the supply and discharge channels of the Perm Power Plant.
The calculations showed that under a strong wind blowing against the river flow, a large-scale three-dimensional vortex forms within several hours; its horizontal size equals the distance between the junctions of the supply and discharge channels with the reservoir, and its vertical size equals the depth of the river. The backwater from the Kama hydroelectric station leads to active movement of warm water in the surface layer against the river flow. In this case, within a few hours, warm water reaches the water intake of the cooling channel, which is extremely undesirable from a technological point of view. Significant temperature heterogeneity also arises with depth, with the temperature gradient being greatest near the bottom of the river.
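The transport of heat along a channel, as resolved by the 1D component of such a combined scheme, can be illustrated with a minimal explicit advection-diffusion step. This is a generic textbook sketch with illustrative parameters, not the authors' combined 1D/2D/3D Kama model:

```python
import numpy as np

# One explicit upwind step of dT/dt = -u dT/dx + D d2T/dx2 for a
# temperature profile T along a channel, with a heated-discharge
# boundary at the upstream end. Stability requires u*dt/dx <= 1 and
# D*dt/dx**2 <= 0.5 (all values here are illustrative).
def step_temperature(T, u, D, dx, dt, T_inflow):
    Tn = T.copy()
    adv = -u * (T[1:-1] - T[:-2]) / dx                 # upwind advection, u > 0
    dif = D * (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2   # diffusion
    Tn[1:-1] = T[1:-1] + dt * (adv + dif)
    Tn[0] = T_inflow     # fixed temperature at the discharge boundary
    Tn[-1] = Tn[-2]      # zero-gradient open boundary downstream
    return Tn

# Warm front (25 °C) entering a 10 °C channel
T = np.full(200, 10.0)
for _ in range(200):
    T = step_temperature(T, u=0.5, D=1.0, dx=10.0, dt=1.0, T_inflow=25.0)
```

Reversing the effective velocity near the surface is the 1D analogue of the wind- and backwater-driven upstream propagation of warm water described above; the full effect, of course, requires the 3D formulation.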
The study was supported by Russian Science Foundation (grant 17-77-20093).
How to cite: Parshakova, Y., Lyubimova, T., Lepikhin, A., and Lyakhin, Y.: Investigation of the effect of backwater on the propagation of thermal pollution during operation of a thermal power plant, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13430, https://doi.org/10.5194/egusphere-egu2020-13430, 2020.
For operation of large thermal power plants, reservoirs-receivers are the most common type of cooler. Depending on the capacity of the power plants and the size of the water bodies used as reservoirs-receivers, the organization of the cooling system may be direct-flow or reverse. The main task of the effective operation of the cooling system is to ensure the stability of its functioning under conditions of significant variability of both hydrological and meteorological parameters. For the solution of this problem, the development of technological operation schemes based on computational experiments is of fundamental importance. It is also important to take into account the effect of thermal pollution on changes in the ice-thermal regime, hydrobiological processes in the area of the influence of the discharge of heated water. At the same time, it is important to take into account both technological and environmental criteria when assessing the parameters of temperature fields created during the discharge of heated water, depending on the complex of technological and hydrometeorological parameters.
In the present paper, we considered the scenarios of the impact of the Perm Power Plant on the Kama reservoir using a direct-flow cooling system, which are of the great interest from an environmental and technological points of view. Three-dimensional numerical simulation was carried out for different operating modes of the Kama reservoir. Since significant vertical temperature heterogeneity is observed in reservoirs-receivers, in order to achieve sufficient correctness, calculations should be conducted in the general case using 3D models. However, the implementation of such calculations for large water bodies in the conditions of the extremely limited current monitoring network encounters very significant difficulties due to the limited computing resources. In this regard, a combined calculation scheme is proposed and is being implemented, including models in 1D, 2D, 3D formulations. 1D model was built for the entire reservoir, 2D model for 30 km-length section adjacent to the Perm Power Plant, and for 10 km-length section that includes the supply and discharge channels of the Perm Power Plant, 3D model was created.
The calculations have shown that under conditions of strong wind in a direction opposite to the direction of the river flow, large-scale three-dimensional vortex is formed within several hours, the horizontal size of which is equal to the distance between the junctions of the supply and discharge channels with the reservoir, and the vertical size is equal to the depth of the river. The presence of backwater from the Kama hydroelectric station leads to the active movement of warm water in the surface layer against the river flow. In this case, in a few hours, warm water reaches the water intake point of the cooling channel, which is extremely undesirable from a technological point of view. Significant temperature heterogeneity also arises in depth, with the temperature gradient being greatest near the bottom of the river.
The study was supported by Russian Science Foundation (grant 17-77-20093).
How to cite: Parshakova, Y., Lyubimova, T., Lepikhin, A., and Lyakhin, Y.: Investigation of the effect of backwater on the propagation of thermal pollution during operation of a thermal power plant, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13430, https://doi.org/10.5194/egusphere-egu2020-13430, 2020.
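The abstract does not give the model equations. As an illustration of what the 1D component of such a combined scheme typically solves, a minimal depth-averaged advection-diffusion model of reservoir temperature with a heated-discharge source term could look like the sketch below; all parameter values are hypothetical, not the study's.

```python
import numpy as np

def simulate_1d_temperature(n_cells=200, dx=500.0, dt=60.0, steps=1000,
                            u=0.1, D=5.0, T0=10.0,
                            discharge_cell=50, dT_discharge=0.005):
    """Explicit upwind advection plus central diffusion of temperature
    along a 1D reservoir, with heated water injected at one cell.
    All parameters (velocity u [m/s], diffusivity D [m^2/s], grid dx [m],
    time step dt [s]) are illustrative placeholders."""
    T = np.full(n_cells, T0)
    for _ in range(steps):
        Tn = T.copy()
        # upwind advection (flow in +x direction) and central diffusion
        adv = -u * (Tn[1:-1] - Tn[:-2]) / dx
        dif = D * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2]) / dx**2
        T[1:-1] = Tn[1:-1] + dt * (adv + dif)
        T[discharge_cell] += dT_discharge   # heated-water source term
        T[0], T[-1] = T0, T[-2]             # inflow / outflow boundaries
    return T
```

The explicit scheme is stable here because both the Courant number (u·dt/dx ≈ 0.012) and the diffusion number (D·dt/dx² ≈ 0.0012) are far below their limits; a warm plume builds around the discharge cell and is carried downstream.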
EGU2020-16784 | Displays | ITS1.1/ERE7.1
Integrating reservoirs in a landscape-based hydrological model to understand the impact of the reservoir on flow regime in the Cauvery river basin, India
Anjana Ekka, Saket Kesav, Saket Pande, Pieter van der Zaag, and Yong Jiang
As economic development expands, river resources are exploited for power generation, flood control, and irrigation, which substantially impacts river hydrology and the surrounding ecosystem. Reservoir construction is one of the major contributors to such changes. Around the world, long free-flowing rivers are impaired by reservoirs and the downstream propagation of fragmentation and flow regulation, which affects the structural and functional connectivity of the entire basin. The extent of interdependence and interaction among biophysical, social, and economic characteristics determines hydrological behaviour and thus defines the sustainability of the river ecosystem. In this regard, topography-driven rainfall-runoff modelling (the Flex-Topo model) approximates the hydrological behaviour of the river landscape by delineating the catchment into three functional hydrological response units (HRUs). However, these HRUs are natural and do not take anthropogenic factors into account. Therefore, the present study aims to understand the effects of integrating reservoirs into a Flex-Topo model and to assess model transferability in predicting the river flow regime in ungauged basins.
The Cauvery river basin in India is chosen as a case study. The construction of reservoirs in the Cauvery basin helped to expand irrigated areas, securing water availability under water-stress conditions. Nevertheless, it aggravates water allocation between upstream and downstream states, leading to conflict among the states sharing the river basin. Based on size and storage capacity, four large reservoirs are selected for the study. First, the watershed area is delineated based on the gauge location. To add reservoirs, two different Flex models are created for the watershed areas upstream and downstream of the reservoirs, and a separate reservoir model is created for each reservoir. The reservoir model is integrated into the Flex model following operation rule curves to simulate the reservoir for different reservoir yields. It is assumed that the response of the upstream catchment serves as input to the reservoir, and that the outflow of the reservoir is input to the downstream catchment. These three subunits are connected, and river flow is simulated at the gauge station located downstream of the reservoir. Three different procedures are adopted to calibrate the model. First, the integrated Flex-reservoir model is calibrated using the downstream gauging station. In the second method, the reservoir is calibrated first; then, keeping the reservoir parameters fixed, the integrated model is calibrated using the downstream gauging station. Third, the reservoir model and the Flex model are calibrated separately. The modelled runoff from each parameter set is compared with observations using the Nash-Sutcliffe model efficiency and the mean absolute error.
Results indicate that the second calibration method performed best and improved the overall performance of the Flex-Topo model. Further, results are compared across the four reservoirs in order to develop a generalized understanding of transferring an integrated Flex model to basins where reservoir data are unavailable. The proposed method therefore provides a way to simulate both biophysical constraints and anthropogenic modifications simultaneously in the river landscape, and enhances understanding of the impact of reservoirs on the river flow regime.
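For reference, the two skill scores used to compare parameter sets are standard and can be computed as follows; this is a textbook formulation, not the authors' code.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe model efficiency: 1 is a perfect fit; 0 means the
    simulation is no better than the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def mae(obs, sim):
    """Mean absolute error between observed and simulated flows."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.abs(obs - sim).mean())
```

A simulation equal to the observed series scores NSE = 1, while a constant simulation at the observed mean scores NSE = 0, which is why NSE is often paired with an absolute-error measure such as MAE.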
How to cite: Ekka, A., Kesav, S., Pande, S., Zaag, P. V. D., and Jiang, Y.: Integrating reservoirs in a landscape-based hydrological model to understand the impact of the reservoir on flow regime in the Cauvery river basin, India, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-16784, https://doi.org/10.5194/egusphere-egu2020-16784, 2020.
EGU2020-16867 | Displays | ITS1.1/ERE7.1 | Highlight
The Water-Land-Energy-Food-Climate Nexus In Sardinia
Antonio Trabucco, Sara Masia, Janez Sušnik, Donatella Spano, and Simone Mereu
Water use in the Mediterranean has often been pushed beyond sustainability, leading to water degradation and deterioration of ecosystem services. Different factors are interlinked with water management within a dynamically complex system (i.e. the nexus) characterized by many feedbacks, trade-offs, and a high complexity of socioeconomic and environmental agents inducing non-linear responses that are hard to predict. Understanding such nexus systems requires innovative methodologies able to integrate different domains (e.g. hydrology, economics, planning, environmental and social sciences) and potential feedbacks, to support effective and targeted adaptation measures while taking into consideration the uncertainty of climate change forecasts and associated impacts. Within the H2020 SIM4NEXUS project, the water-land-energy-food-climate nexus links for the island of Sardinia were represented with system dynamics modelling, together with relevant policy objectives, goals and measures. Sardinia, like many other Mediterranean regions, must implement a sustainable approach to water management, taking into account an equitable distribution of water resources between different sectors, economic needs, social priorities and the ecology of freshwater ecosystems.
For the Sardinia case study, the main focus was the representation of the reservoir water balance for the island, accounting predominantly for water supply and for water demand related to agriculture, hydropower production, domestic/tourist consumption and environmental flows. With irrigated agriculture being the largest water consumer, this sector was modelled in more detail, with crop-specific distributions and projections. While water is the central focus, links with the other nexus sectors, including energy, climate, food and land use, are included. Energy generation and consumption were also important, along with the mode of generation and the sector of consumption, as was modelling the change in crop types (i.e. land use and food production changes) and the crop water requirements associated with potential changes in crops and cropped area, and in response to change in the local climate. Energy production is modelled from sources including oil, coal and methane, solar, wind and hydropower, while energy demand comes from the agricultural, domestic, industrial and service sectors (including transportation). The use of energy by the different sectors, and from the different sources, whether renewable or non-renewable, has different implications for GHG emissions and climate change.
While driven by strong interests in securing food provision, an increase in irrigation in the Mediterranean may not be fully sustainable. Crop irrigation requirements are projected to increase by 4-18% by 2050 compared to present conditions, limiting the expansion of irrigated agriculture in Sardinia. Over the same period, inflow to the reservoirs may decrease by 5-20%, and evaporation losses from reservoir surfaces may increase by 10%. Policy rules are tested and highlight how optimal allocation should be enforced in order to safeguard the sustainability of natural resources over time, especially when considering climate variability. Natural resources are better preserved by avoiding conflicts during strong seasonal peaks (i.e. summer). To meet these challenges, new infrastructure and investments should increase use efficiency. All this would require changes in institutional and market conditions, with more cautious water management that includes pricing and recycling policies.
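The SIM4NEXUS system dynamics model itself is not reproduced in the abstract. A toy reservoir storage balance, stressed with changes taken from the reported ranges (inflow -20%, evaporation +10%, irrigation demand +18%; volumes and time steps arbitrary), can illustrate how supply deficits emerge under such a scenario:

```python
def reservoir_balance(storage0, inflow, evaporation, demands, capacity):
    """March a simple reservoir storage balance through time.
    Each step: add inflow, subtract evaporation, cap at capacity,
    then supply as much of the demand as storage allows.
    Returns the storage trace and the per-step unmet demand."""
    storage, trace, deficits = storage0, [], []
    for q_in, ev, dem in zip(inflow, evaporation, demands):
        storage = min(storage + q_in - ev, capacity)
        supplied = min(dem, max(storage, 0.0))
        deficits.append(dem - supplied)
        storage = max(storage - supplied, 0.0)
        trace.append(storage)
    return trace, deficits

# Baseline vs. a 2050-style scenario: -20% inflow, +10% evaporation,
# +18% irrigation demand (magnitudes from the ranges quoted above)
base_inflow, base_ev, base_dem = [100.0] * 10, [10.0] * 10, [80.0] * 10
_, d_base = reservoir_balance(50, base_inflow, base_ev, base_dem, 300)
_, d_2050 = reservoir_balance(50, [q * 0.8 for q in base_inflow],
                              [e * 1.1 for e in base_ev],
                              [d * 1.18 for d in base_dem], 300)
```

In this sketch the baseline meets demand every step, while the stressed scenario drains storage and accumulates deficits, which is the kind of trade-off the system dynamics model tracks across all nexus sectors at once.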
How to cite: Trabucco, A., Masia, S., Sušnik, J., Spano, D., and Mereu, S.: The Water-Land-Energy-Food-Climate Nexus In Sardinia, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-16867, https://doi.org/10.5194/egusphere-egu2020-16867, 2020.
EGU2020-17840 | Displays | ITS1.1/ERE7.1
Quantifying Water-Energy-food Nexus based on CO2 emission in farm-land
Marzieh Hasanzadeh Saray, Ali Torabi Haghighi, Nasim Fazel, and Björn Klöve
Water, energy, and food security in today's world are hampered by high population and economic growth, pressure on limited resources, and climate change. Accordingly, balancing the critical components of biomass within a water-energy-food (WEF) nexus approach is one of the essential pillars of water resources management, enhancing the long-term sustainability of water resources by promoting sustainable development. Assessing the WEF nexus on the basis of CO2 emissions makes it possible to quantify the role of each WEF component. This work aims to quantify the WEF nexus in a pilot study in north-west Iran by analyzing the CO2 emissions of the sectors involved. Gathering all the required data on the different activities in the water, energy, and food sectors is the main challenge in this regard. The Sahand Agro-Industry Co. was established in 1996 and covers an area of about 200 ha, producing alfalfa, maize, potato, rapeseed, sugar beet, and wheat. The area, with an average annual temperature of 10.1 °C and about 356 mm of precipitation, lies in a warm, dry-summer continental climate (Dsb according to the Köppen climate classification). A detailed dataset including labor, machinery, diesel oil, fertilizer (nitrogen, potassium, and phosphorus), biocides (pesticide, fungicide, and herbicide), irrigation water (groundwater and surface water), and output per unit area per product was collected for 2008-2017. We evaluated the WEF nexus by estimating CO2 emissions based on the water and energy equivalents and the food production per unit area of the crop production systems. In this regard, we applied several indices, including the WEF nexus, water and energy consumption, and mass and economic productivity, to estimate the CO2 emitted over a ten-year period, as well as the effect of changing the cropping pattern on CO2 emissions.
Furthermore, we developed an approach to finding an optimal cropping pattern that minimizes water and energy consumption and maximizes productivity. Given the detailed calculation of the above indices and the existing operational limitations, two margin scenarios were first developed: (1) the cropping pattern with the lowest CO2 emission and (2) the cropping pattern with the maximum net benefit. For each pattern, we calculated the area for the different crops. Then, by combining these two marginal patterns and using dynamic programming, we developed 128 different patterns between the two margins. The results showed that, because the CO2 equivalent differs between crops, different cultivation patterns have different effects on carbon dioxide emissions. Water withdrawal (extraction, conveyance, and distribution of water in the field) requires energy, which varies depending on the source used for irrigation. Water productivity, in kcal per m3, also varies with crop type, cropping system, and agricultural management. Finally, we clustered the scenarios based on CO2 emission and net benefit and identified the optimum condition.
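The dynamic programming used to generate the 128 patterns is not detailed in the abstract. The underlying idea of interpolating between the two margin scenarios can be sketched as follows, with entirely hypothetical per-crop CO2 and net-benefit figures (the study's actual values are not given here):

```python
# Hypothetical per-hectare CO2 emission (t/ha) and net benefit (USD/ha)
crops = {"alfalfa": (2.1, 900.0), "maize": (3.4, 1200.0), "wheat": (1.8, 700.0)}
TOTAL_AREA = 200.0  # ha, as in the case-study farm

# Margin scenario 1: all area to the lowest-CO2 crop
# Margin scenario 2: all area to the highest-net-benefit crop
low_co2 = min(crops, key=lambda c: crops[c][0])
high_ben = max(crops, key=lambda c: crops[c][1])

def blend(weight):
    """Linearly blend the two margin patterns; weight in [0, 1].
    Returns (total CO2 emission, total net benefit) for the blend."""
    area = {c: 0.0 for c in crops}
    area[low_co2] += (1.0 - weight) * TOTAL_AREA
    area[high_ben] += weight * TOTAL_AREA
    co2 = sum(a * crops[c][0] for c, a in area.items())
    benefit = sum(a * crops[c][1] for c, a in area.items())
    return co2, benefit

# 128 candidate patterns between the two margins, as in the study
patterns = [blend(w / 127) for w in range(128)]
```

Each candidate trades emission against benefit; clustering these (CO2, benefit) pairs, as the authors describe, then picks a compromise pattern rather than either extreme.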
Keywords: CO2 emission, economic productivity, optimization, sustainable development, water-energy-food Nexus
How to cite: Hasanzadeh Saray, M., Torabi Haghighi, A., Fazel, N., and Klöve, B.: Quantifying Water-Energy-food Nexus based on CO2 emission in farm-land, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17840, https://doi.org/10.5194/egusphere-egu2020-17840, 2020.
EGU2020-18378 | Displays | ITS1.1/ERE7.1
Dynamic Energy-Water-Land hotspots at variable spatial scales across the United States
Zarrar Khan, Thomas Wild, Chris Vernon, Mohamad Hejazi, Gokul Iyer, and Neal Graham
Energy, water, and land (EWL) resource planning at regional (e.g. large river basins, states and provinces, balancing authorities) and sub-regional (e.g. sub-basins, counties, Agro-Ecological Zones (AEZs)) scales has commonly been conducted in relative isolation by institutions focused on individual sectors, such as water supply or electricity. The effectiveness of this traditional approach is increasingly strained by rapid integration among sectors as well as by a range of regional and global forces, such as climatic, technological and socioeconomic change. In this study, we explore the regional and sub-regional implications of these changes across the United States for a suite of scenarios representing a range of socio-economic and climate pathways. We couple a global integrated assessment model with a suite of sectoral downscaling tools to analyze the evolution of EWL hotspots at variable spatial scales. The ability to flexibly telescope into regions to identify the evolution of dynamic EWL hotspots allows planners to capitalize on synergistic opportunities, and to avoid potential conflicts across sectors, both at stakeholder-specific jurisdictional boundaries and in the context of the larger region.
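The sectoral downscaling tools are not described in the abstract. The generic idea of proportionally downscaling a regional total to sub-regional units can be sketched as follows; the unit names, weights, and totals are hypothetical, and the study's actual tools are more sophisticated than this:

```python
def downscale(regional_total, weights):
    """Proportionally allocate a regional total (e.g. water demand)
    to sub-regional units using normalized weights (e.g. population).
    A generic sketch, not the study's downscaling method."""
    total_weight = float(sum(weights.values()))
    return {unit: regional_total * w / total_weight
            for unit, w in weights.items()}

# Hypothetical county populations within one basin
county_pop = {"A": 120_000, "B": 60_000, "C": 20_000}
county_demand = downscale(500.0, county_pop)  # 500 volume units of demand
```

The same pattern applies with other weighting layers (irrigated area for agricultural water, generation capacity for energy), which is what lets hotspot maps be telescoped from basins down to counties.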
How to cite: Khan, Z., Wild, T., Vernon, C., Hejazi, M., Iyer, G., and Graham, N.: Dynamic Energy-Water-Land hotspots at variable spatial scales across the United States, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18378, https://doi.org/10.5194/egusphere-egu2020-18378, 2020.
EGU2020-19986 | Displays | ITS1.1/ERE7.1
Environmental sustainability of increasing silk demand in India
Livia Ricciardi, Seda Karatas, Davide Danilo Chiarelli, and Maria Cristina Rulli
Competition for natural resources between food and cash crops is a current challenge in many developing countries, such as India, that are experiencing both a lack of food availability and a fast-growing economy. The silk industry has always been significant for the Indian economy, since it provides high profits and employment. Almost 90% of the world's commercial silk production is mulberry silk. Recently, with the aim of increasing silk production in the country, the Central Silk Board of the Indian Ministry of Textiles and the Indian Space Research Organization have identified potentially suitable areas for mulberry cultivation through horizontal expansion into wastelands. Here, taking India as a case study, we analyse whether the current cultivation of mulberry silk and the horizontal expansion of moriculture are environmentally sustainable. To this end, using the present land cover, we apply a dynamic, spatially distributed crop water balance model to evaluate mulberry water requirements and green and blue water provision, analysing both water scarcity at the pixel scale and the impact of present and future moriculture on its increase.
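The crop water balance model itself is not specified in the abstract. The green/blue water split it relies on can be illustrated with a minimal daily soil-moisture bucket; units and values below are purely illustrative, and the actual model is spatially distributed and far more detailed:

```python
def green_blue_water(rain, et_crop, soil_capacity, soil0=0.0):
    """Daily bucket model: crop evapotranspiration is met first from
    rainfall stored in the soil (green water); any remaining requirement
    is assumed to come from irrigation (blue water). Units: mm/day."""
    soil, green, blue = soil0, 0.0, 0.0
    for p, et in zip(rain, et_crop):
        soil = min(soil + p, soil_capacity)   # infiltrate rainfall
        from_soil = min(et, soil)             # green-water supply
        green += from_soil
        blue += et - from_soil                # irrigation (blue) deficit
        soil -= from_soil
    return green, blue
```

Summed over a growing season and mapped per pixel, the blue-water term is what gets compared against local water availability to flag scarcity.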
Results show that, in the baseline scenario, some States suitable for mulberry horizontal expansion (e.g. West Bengal, Bihar, Tamil Nadu, Madhya Pradesh, Uttar Pradesh, Karnataka, Telangana) already experience water scarcity and a high prevalence of malnutrition, which increasing moriculture would exacerbate on both yearly and monthly scales. In other States (i.e. Orissa, Chhattisgarh, Mizoram, Assam, Manipur, Tripura, Meghalaya and Nagaland), mulberry expansion emerges as the triggering factor of water scarcity. The north-eastern Indian districts, where potential mulberry areas are clustered, would be particularly affected.
The analysis of population exposure to water scarcity due to mulberry horizontal expansion shows 11 million people potentially affected in India, of whom more than 65% live in the north-eastern States. The affected population accounts for more than 15% of the total inhabitants of the north-eastern region.
How to cite: Ricciardi, L., Karatas, S., Chiarelli, D. D., and Rulli, M. C.: Environmental sustainability of increasing silk demand in India, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19986, https://doi.org/10.5194/egusphere-egu2020-19986, 2020.
EGU2020-20021 | Displays | ITS1.1/ERE7.1
Urban-Nexus: Dependency of urban agglomeration, Hyderabad, India, on external water resources in developing economy.Koteswara Rao Dagani and Satish Kumar Regonda
Self-sufficiency in water, food, and energy has become a major concern for cities in the global urbanization era. To meet their water demands, cities increasingly depend on external water resources in the form of trade and imports, a dependence that has come into focus with rapid urbanization. In this context, cities must measure their consumption in order to know their dependence on external resources and to draft their trade policies. However, quantifying this dependency at the city scale is difficult because city-level trade data are scarce.
Here we propose a framework using a consumer-centric approach to quantify the dependency of an urban agglomeration, from consumption and production perspectives, when no city-level trade data are available. For the consumption perspective, we used survey data provided by the National Sample Survey Organization of India to assess the consumption water footprint (WF); for the production perspective, we used production statistics of the study area to assess the production WF. The difference between the consumption and production WF gives the dependency of the agglomeration on external resources. From the consumption perspective, the consumption WF of the study area is 1041 m3/cap/year.
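The consumption-minus-production logic can be sketched in a few lines. The consumption WF of 1041 m3/cap/year is taken from the abstract; the production WF used here is a made-up placeholder, since the abstract does not report it:

```python
def external_dependency(consumption_wf, production_wf):
    """Dependency on external water resources: the part of the
    consumption water footprint (WF, m3/cap/year) not covered by
    local production. Net-exporting cases are floored at zero."""
    return max(0.0, consumption_wf - production_wf)

# consumption WF from the abstract; production WF is hypothetical
dep = external_dependency(consumption_wf=1041.0, production_wf=600.0)
```

The same subtraction can be applied between any pair of entities (city vs. hinterland, state vs. state), which is what makes the framework flexible.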
This framework is flexible and can be applied between any two or more entities to determine the dependency of cities on external resources. Moreover, this assessment can play a key role in trade policy decisions and in tracking the consumption and dependency of cities in order to achieve the self-sufficiency and sustainability goals of smart cities.
How to cite: Dagani, K. R. and Regonda, S. K.: Urban-Nexus: Dependency of urban agglomeration, Hyderabad, India, on external water resources in developing economy., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20021, https://doi.org/10.5194/egusphere-egu2020-20021, 2020.
EGU2020-20951 | Displays | ITS1.1/ERE7.1
Indirect land use risk modelling with System Dynamics: the case of bioplasticsDiego Marazza, Enrico Balugani, and Eva Merloni
Indirect Land Use Change (ILUC) is a land use process driven by an increase in land demand and mediated by the global market: for example, an increase in demand for a certain crop in a specific country, due to its use for the production of bio-materials, drives up the global crop price, eventually resulting in land use change in some other country. Since land demand is already high for food and feed production, ILUC often determines whether the production of a bio-material is sustainable or not. ILUC is very difficult to observe and is therefore usually estimated through models rather than measured; the different model families depend on which part of the complex problem is taken into account: economic equilibrium models (partial or general), causal-descriptive models, and normative models. Most of these models are static, i.e. time is not directly factored into the model. A study by the JRC showed that ILUC models have high levels of uncertainty, both within and among models, due to uncertainty in input data, different assumptions and different modelling frameworks. The (i) lack of model transparency, (ii) lack of dynamic effects and (iii) high model uncertainties make it difficult to include ILUC in sustainability policies.
Here, we present a dynamic causal-descriptive model to estimate changes in land demand as a proxy of ILUC risk, and test it for an increasing production of bioplastic materials on a global scale. We used a system dynamics framework to (i) keep the model easy to understand and (ii) account for dynamic effects such as delays and feedback loops. We addressed the (iii) uncertainty problem by (a) considering ILUC on a global scale only, (b) using a yearly time step to avoid short-term economic effects, (c) identifying control variables to use for model validation, and (d) modelling only the projected change in land demand and translating it into global risk classes, in line with the approach pursued in Europe by the Renewable Energy Directive. The model includes the relevant processes that the literature identifies as influential for ILUC: use of co-products, competition with the feed sector, the price effect on agricultural production (intensive margin), expansion onto less suitable land (extensive margin), use of agricultural residues, soil erosion, and increases in agricultural yields. The model was then calibrated and validated using the extensive FAOSTAT dataset and studied using different sensitivity analysis techniques.
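The yearly-time-step, stock-and-flow logic of such a system dynamics model can be sketched minimally as land demand = crop demand / yield, with demand growing over time and yields following an exogenous trend (reflecting the authors' finding that global yields are insensitive to crop prices). All parameter values are illustrative, not the authors' calibration:

```python
def simulate_land_demand(demand0_t, yield0_t_per_ha, demand_growth,
                         yield_trend, years):
    """Minimal system-dynamics-style sketch with a yearly time step.
    Land demand (ha) each year is crop demand (t) divided by yield
    (t/ha); demand grows at a fixed rate, yields follow an exogenous
    trend. Returns the yearly land demand trajectory."""
    demand, yld = demand0_t, yield0_t_per_ha
    land = []
    for _ in range(years):
        land.append(demand / yld)
        demand *= (1 + demand_growth)   # exogenous demand growth
        yld *= (1 + yield_trend)        # intensification (yield trend)
    return land

# Illustrative run: 2%/yr demand growth vs. 1%/yr yield trend
land = simulate_land_demand(demand0_t=1e6, yield0_t_per_ha=2.5,
                            demand_growth=0.02, yield_trend=0.01, years=10)
# land demand rises only if demand growth outpaces the yield trend
```

In the full model this core is augmented with co-products, feed-sector competition, intensive/extensive margins, residues and soil erosion, and the projected change in land demand is then mapped to risk classes.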
The validation shows that the model's 10-year projections are reliable (~8% error). Both local and global sensitivity analyses show that the most relevant factor influencing ILUC risk is the trend in agricultural yields, which, at the global level and contrary to what is usually assumed in other models, is insensitive to crop prices. Other relevant factors, of interest to policy makers, are the yields of bioplastics and the use of co-products. The analysis shows that there are levels of production that carry negligible risk over the next 30 years for specific biomasses under specific growth and processing conditions. However, a full shift from fossil-based plastics to bio-based plastics would result in 200-300 Mha of land conversion globally.
How to cite: Marazza, D., Balugani, E., and Merloni, E.: Indirect land use risk modelling with System Dynamics: the case of bioplastics, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20951, https://doi.org/10.5194/egusphere-egu2020-20951, 2020.
EGU2020-21594 | Displays | ITS1.1/ERE7.1
Characterization of economic and ecological advantages and challenges in development of conventional and unconventional hydrocarbon, non-hydrocarbon and renewable energy sources for resource-based economy in KazakhstanAleksandr Ivakhnenko and Beibarys Bakytzhan
As global socioeconomic development faces the climate change challenge of minimizing greenhouse gas (GHG) emissions and moving towards a low-carbon economy (LCE), a major driving force for success in achieving the Sustainable Development Goals (SDGs) is the cost of energy generation. One of the main factors in selecting an energy source for power supply and generation is price, which is often influenced to varying degrees by government policy incentives and by technological and demographic challenges in different countries. We investigate the energy source situation and possible development trends for Kazakhstan, a developing country with a resource-based economy. In general, economic aspects affect the quality and quantity of energy generated from different sources, alongside incentives driven by environmental concern. Traditional energy sources in Kazakhstan, such as coal, oil and natural gas, remain low-cost to produce due to a high reserve base, which leads to steady growth in this area. The cost of generating 1 kWh of energy from the cheapest carbon energy source, sub-bituminous coal, is about $0.0024; for natural gas $0.0057; for conventional oil $0.0152 (conventional diesel $0.0664); and for expensive unconventional oil $0.0361, whereas renewable hydrocarbons could potentially become more competitive with unconventional oil production (methanol $0.0540, biodiesel $0.0837, bioethanol $0.1933 per kWh). Furthermore, we consider the main non-traditional and renewable energy sources: solar, wind, hydro, biofuels, hydrogen, methane, gasoline, uranium, and others. There is a difference between the breakeven prices of conventional gas and biomethane ($0.0057 and $0.047-0.15 respectively, averaging $0.0675 per kWh for biomethane), which is often related to the difference in their production methods. The main advantage of biomethane is its environmentally friendly production.
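The per-kWh figures quoted above can be collected into a simple cost ranking. The values are taken directly from the abstract (biomethane represented by its stated average); this is just a convenient way to compare them, not part of the authors' methodology:

```python
# Generation costs in $ per kWh, as quoted in the abstract
costs = {
    "sub-bituminous coal": 0.0024,
    "natural gas": 0.0057,
    "uranium": 0.0069,
    "conventional oil": 0.0152,
    "unconventional oil": 0.0361,
    "methanol": 0.0540,
    "conventional diesel": 0.0664,
    "biomethane (avg)": 0.0675,
    "biodiesel": 0.0837,
    "bioethanol": 0.1933,
}
ranked = sorted(costs, key=costs.get)  # cheapest first
```

Such a ranking makes the competitiveness gap visible at a glance: the renewable hydrocarbons cluster at the expensive end, closest to unconventional oil.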
We also propose an assessment of fuels by environmental characteristics: uranium, one of the hazardous sources, is the fourth cheapest at $0.0069 per kWh, but the environmental damage caused by its waste is the greatest. At the same time, hydropower is seven times more expensive than uranium; it does not cause direct health damage, but it significantly influences the ecosystem balance. Hydrogen fuel is the most expensive of all. Overall, in Kazakhstan, energy produced from the sun, wind and biogas is more expensive than global benchmarks by 0.4 to 5.5 cents per kWh, whereas hydropower remains cheaper. In addition, based on the research findings, we analysed the potential for sustainable non-renewable and renewable energy development in the future for the case of Kazakhstan's resource-based economy.
How to cite: Ivakhnenko, A. and Bakytzhan, B.: Characterization of economic and ecological advantages and challenges in development of conventional and unconventional hydrocarbon, non-hydrocarbon and renewable energy sources for resource-based economy in Kazakhstan, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21594, https://doi.org/10.5194/egusphere-egu2020-21594, 2020.
EGU2020-10933 | Displays | ITS1.1/ERE7.1
Interdisciplinary collaboration in the development of IPCC report glossariesRobin Matthews, Renee van Diemen, Nora Marie Weyer, and Jesbin Baidya
The Intergovernmental Panel on Climate Change (IPCC) produces assessment reports on climate change, spanning physical climate science, climate impacts and adaptation, and mitigation. These reports draw upon scientific, technical and socio-economic information and are produced by interdisciplinary and international author teams. The reports, including their glossaries, are used by diverse audiences across the natural and social sciences, policy and practice, and education. IPCC report glossaries are an invaluable resource in their own right, covering the domains of each report and providing rigorous definitions for terms that are oft-used in public discourse.
The IPCC is currently in its Sixth Assessment Cycle (AR6), for which it has already released three Special Reports, and is currently preparing three Working Group (WG) Reports and a Synthesis Report to be released in 2021/22. Since each report and report chapter is written by a different author team, ensuring consistency in approach and conclusions across and within each report represents a key challenge. An important contribution towards achieving consistency is the development of single definitions for terms to be used across several reports. To facilitate the development of such definitions, the IPCC Secretariat and Technical Support Units have created custom software for internal author use, termed the Collaborative Online Glossary System (COGS). In addition, a public portal for IPCC glossaries (https://apps.ipcc.ch/glossary/) has been developed, where AR5 and approved AR6 report glossaries are hosted and can be readily searched. Here we discuss these tools within the context of interdisciplinary collaboration in climate change assessment. We also highlight the benefits of having consistent definitions when working more broadly at the water-energy-land nexus.
How to cite: Matthews, R., van Diemen, R., Weyer, N. M., and Baidya, J.: Interdisciplinary collaboration in the development of IPCC report glossaries, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10933, https://doi.org/10.5194/egusphere-egu2020-10933, 2020.
EGU2020-21784 | Displays | ITS1.1/ERE7.1
Social and environmental dynamics in a charcoal producing area: The case of Central Pokot, KenyaMaike Petersen, Christoph Bergmann, Paul Roden, and Marcus Nüsser
Wood charcoal ranks amongst the most commercialized but least regulated commodities in sub-Saharan Africa. Despite its prevalence as an energy source for cooking and heating, the localized environmental and livelihood impacts of charcoal production are poorly understood. This research deficit is amplified by widespread negative views of this activity as a poverty-driven cause of deforestation and land degradation. However, the charcoal-degradation nexus is apparently more complicated, not least because the extraction of biomass from already degraded woodlands can be sustainable under various management regimes. In a case study in Central Pokot, Kenya, where charcoal production began in earnest in the early 1990s, we have investigated the social and environmental dynamics that are interlinked with the production of charcoal. Our methodological approach integrates remote sensing techniques with empirically based social scientific analyses across multiple spatial and temporal scales. Our results show that the area has undergone significant changes, both in the human and in the physical sphere. While public opinion suggests a close connection between charcoal production and land degradation, a detailed Landsat-based land use and land cover change detection could not reveal a causal connection. In addition, a high-resolution analysis using an unmanned aerial system showed only minor effects of charcoal production on the vegetation. Our data indicate that rural small-scale charcoal production has the potential to be transformed into a sustainable livelihood. For this to happen, however, policy makers need to incorporate the producers' specific situation into legal frameworks.
How to cite: Petersen, M., Bergmann, C., Roden, P., and Nüsser, M.: Social and environmental dynamics in a charcoal producing area: The case of Central Pokot, Kenya, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21784, https://doi.org/10.5194/egusphere-egu2020-21784, 2020.
ITS1.2/CL5.9 – How to make weather and climate services more efficient in developing countries
EGU2020-21847 | Displays | ITS1.2/CL5.9
Climate services: The product or the user, which came first?Stefan Liersch, Holger Hoff, and Seyni Salack
From our experience in West Africa it is obvious that the concept of climate services is not yet well understood or established in all user groups. Some scientists also wonder whether they have not been working on generating knowledge and information about climate change impacts for decades anyway. In some climate services projects, scientists find themselves in a new role, "selling" their products to users who are not necessarily aware of the product's existence, so that an attempt is made to create demand; in other projects the demand is clear from the beginning. However, the introduction of the term, or the concept, of climate services has the potential to add a new dimension to the world of climate impact research and especially its application. It shifts the attitude of scientists towards the applicability of their results in the direction of more targeted, demand-driven or, ideally, even co-produced information and services. Understanding scientific information as a service, rather than as self-sufficient information for the scientific community, helps to better meet the needs of users. Improving the production and particularly the use of climate services challenges both parties, producer and user. To a certain extent, the scientist has to rethink and see the results as a valuable product that can be easily understood and used by others. This often requires a redesign, not necessarily of the product itself but of the way it is presented. The user, in turn, must formulate precisely which information is useful to support his or her daily work, e.g. integrating climate change information into development plans for natural resources, sustainable energy planning, or adaptation and mitigation strategies.
This part in particular poses a real challenge, as the user does not always urgently need the information that a project intends to provide (bad timing), or is not in a position to adequately formulate the type of information required by the institution where he or she is employed. In such cases, scientists occasionally face situations where they try to anticipate what kind of information is really useful for the user. Hence, communication between producer and user is key, but it is normally not trivial because of differences in background, expertise, language, etc. It is a process that requires facilitation by skilled staff.

In the CIREG project in West Africa we elicited the stakeholders' information demand in a first workshop. The greatest need was formulated as capacity building for planning instruments for water and energy management in the context of climate change. By providing training on these tools, we gain access to the stakeholders and insight into their actual information needs. The willingness to share data and information also increases with this kind of cooperation and can lead to real co-production. However, data availability and the willingness to share them remain a challenge in many developing countries. Research projects are usually too short to identify the need for information, to jointly develop that information, and at the same time to guarantee and observe its uptake.
How to cite: Liersch, S., Hoff, H., and Salack, S.: Climate services: The product or the user, which came first?, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21847, https://doi.org/10.5194/egusphere-egu2020-21847, 2020.
EGU2020-4587 | Displays | ITS1.2/CL5.9
Create weather ecosystems to make weather and climate services more efficient in developing countries
Pascal Venzac, Christine David, and Morgane Lovat
WeatherForce – France
Over the last decade, extreme events have become more and more frequent and/or intense, and 85% of the world's population is affected by them. Yet 75% of the most vulnerable countries have little or no reliable, accurate and effective weather information. Effective forecasts and early warnings could, however, make the difference between life and death in those countries. Weather data are crucial for local populations and governments, who can exploit them to optimize economic development and prevent major social and health crises.
By international agreement, National Meteorological and Hydrological Services (NMHS) are the government's authoritative source of weather, climate and water information. However, some NMHS in developing countries struggle to deploy and maintain operational infrastructure such as rain gauge recorders. In addition, rain gauges provide only local information, measuring rainfall at a specific geographic location.
WeatherForce was created in August 2016 by two experts from the Météo-France Group (the French national meteorological service) to help meet the challenges of national weather services in developing countries.
WeatherForce works in close partnership with NMHS to strengthen their fundamental role and to implement weather ecosystems for local development with a sustainable business model.
The WeatherForce platform, the first collaborative weather platform, is designed to help:
- public institutions that need accurate weather data or predictive indicators to make informed decisions to protect local populations and infrastructure;
- universities and research institutes that need a platform to easily access data and to code, modify and share their algorithms;
- startup incubators that look for reliable data to create innovative applications helping local populations cope with climate change;
- private companies that need custom weather services to improve their performance.
Our platform aggregates global data (satellite images, global forecasts, etc.) transposed into a local geographic context (IoT sensors, local stations, field expertise). It is open to local research and innovation ecosystems, offering them access to its qualified data so that they can develop new weather indicators contributing to the creation of a meteorological commons.
WeatherForce aims to increase local sustainability by making weather data available to all through a weather ecosystem.
The business model is based on revenue sharing: the NMHS receives a commission in relation to the revenue generated. WeatherForce sells services to private companies (agribusiness, etc.) and shares the dedicated part with the NMHS, whose contribution consists of local expertise and data. We do not ask the NMHS to pay a subscription fee for the platform.
To summarize, through Public Partner Engagement (PPE) we create weather ecosystems that promote dialogue between private actors and public authorities, and collaboration towards better policies, new business opportunities and a sustainable business model.
The WeatherForce solution connects local actors to each other but also to the rest of the world thanks to our open-source platform designed to allow collaborations between other weather ecosystems worldwide.
How to cite: Venzac, P., David, C., and Lovat, M.: Create weather ecosystems to make weather and climate services more efficient in developing countries, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4587, https://doi.org/10.5194/egusphere-egu2020-4587, 2020.
EGU2020-20842 | Displays | ITS1.2/CL5.9
Future change in renewable energy availability in West Africa: a time of emergence approach
Marco Gaetani, Benjamin Sultan, Serge Janicot, Mathieu Vrac, Robert Vautard, Adjoua Moise Famien, Roberto Buizza, and Mario Martina
Independence in energy production is a key aspect of development in West African countries, which face fast population growth and climate change. Sustainable development depends on the availability of renewable energy sources, which are tightly tied to climate variability and change. In the context of current and projected climate change, development plans need reliable assessments of the future availability of renewable resources.
In this study, the change in the availability of photovoltaic (PV) and wind energy in West Africa in the next decades is assessed. Specifically, the time of emergence (TOE) of climate change in PV and wind potential is estimated in 29 CMIP5 climate projections.
The ensemble robustly simulates a shift to a warmer climate in West Africa, which has already occurred, and projects a decrease in solar radiation at the surface by the 2070s. The reduction in solar radiation is associated with a projected increase in monsoonal precipitation in the 21st century. This results in a likely shift to climate conditions less favourable for PV energy production by the 2040s. On the other hand, the projected change in monsoonal dynamics will drive an increase in low-level winds over the coast, which in turn will result in a robustly simulated shift to climate conditions favourable to wind power production by mid-century. These results show that climate model projections are skilful at providing usable information for adaptation measures in the energy sector.
How to cite: Gaetani, M., Sultan, B., Serge Janicot, S. J., Vrac, M., Vautard, R., Famien, A. M., Buizza, R., and Martina, M.: Future change in renewable energy availability in West Africa: a time of emergence approach, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20842, https://doi.org/10.5194/egusphere-egu2020-20842, 2020.
EGU2020-677 | Displays | ITS1.2/CL5.9
Probabilistic forecasts of the onset of the rainy season using global seasonal forecasts
Manuel Rauch, Jan Bliefernicht, Patrick Laux, Seyni Salack, Moussa Waongo, and Harald Kunstmann
Seasonal forecasts of monsoonal rainfall characteristics such as the onset of the rainy season (ORS) are crucial in semi-arid regions to better support decision-making in water resources management, rain-fed agriculture and other socio-economic sectors. However, forecasts of these variables are rarely produced by weather services in a quantitative way. To overcome this problem, we developed an approach for seasonal forecasting of the ORS using global seasonal forecasts. The approach is not computationally intensive and is therefore operationally applicable for forecasting centers in developing countries. It consists of a quantile-quantile transformation for eliminating systematic differences between ensemble forecasts and observations, a fuzzy-rule-based method for estimating the ORS date, and a graphical method for an improved visualization of probabilistic ORS forecasts, called the onset of the rainy season index (ORSI). The performance of the approach is evaluated from 2000 to 2010 for several climate zones (Sahel, Sudan and Guinean zone) in West Africa, using hindcasts from the Seasonal Forecasting System 4 of ECMWF. Our studies show that seasonal ORS forecasts can be skillful for individual years and specific regions such as the Guinean coast, but are also associated with large uncertainties, in particular for longer lead times. The spatial verification of the ORS fields emphasizes the importance of selecting appropriate performance measures to avoid an overestimation of forecast skill. The ORSI delivers crucial information about an early, mean or late onset of the rainy season and is much easier for users to interpret than the common categorical formats used in seasonal forecasting. Moreover, the new index can be transferred to other seasonal forecast variables, providing an important alternative to the common forecast formats.
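The quantile-quantile transformation mentioned above can be sketched as an empirical CDF mapping: each forecast value is assigned its quantile in the model climatology and replaced by the observed value at the same quantile, removing systematic biases. A minimal illustration (this simple empirical variant is an assumption; the operational implementation in the paper may differ, e.g. in tail handling):

```python
import numpy as np

def quantile_map(forecast, obs_clim, fcst_clim):
    """Empirical quantile-quantile transformation: map each forecast value to
    the observed value occupying the same quantile in the training climatologies."""
    fcst_sorted = np.sort(np.asarray(fcst_clim))
    obs_sorted = np.sort(np.asarray(obs_clim))
    # Empirical CDF position of each forecast value in the model climatology
    ranks = np.searchsorted(fcst_sorted, forecast, side="right") / len(fcst_sorted)
    ranks = np.clip(ranks, 0.0, 1.0)
    # Look up the same quantile in the observed climatology
    quantiles = np.linspace(0.0, 1.0, len(obs_sorted))
    return np.interp(ranks, quantiles, obs_sorted)
```

For example, if the model climatology is systematically twice as wet as observations, a forecast of 100 mm is mapped back to roughly the observed median, since it sits at the median of the model distribution.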
In this presentation we show (i) the operational practice of seasonal forecasting of the ORS and other monsoonal precipitation characteristics, (ii) the methodology and results of the new ORS approach published in Rauch et al. (2019), and (iii) first results of an advanced statistical algorithm using ECMWF-SYS5 hindcasts over a period of 30 years (1981-2010) in combination with an improved observational database.
Rauch, M., Bliefernicht, J., Laux, P., Salack, S., Waongo, M., & Kunstmann, H. (2019). Seasonal forecasting of the onset of the rainy season in West Africa. Atmosphere, 10(9), 528.
How to cite: Rauch, M., Bliefernicht, J., Laux, P., Salack, S., Waongo, M., and Kunstmann, H.: Probabilistic forecasts of the onset of the rainy season using global seasonal forecasts, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-677, https://doi.org/10.5194/egusphere-egu2020-677, 2020.
EGU2020-19654 | Displays | ITS1.2/CL5.9
Big Data for flood management: Realising the benefits for developing countries
Namrata Bhattacharya Mis
Agenda 2030 Goal 11 commits to making disaster risk reduction an integral part of sustainable social and economic development. Flooding poses some of the most serious challenges confronting developing nations, hitting the most vulnerable hardest. The urban poor, frequently at highest risk, are characterised by inadequate housing and a lack of services and infrastructure, with high population growth and spatial expansion in dense, lower-quality urban structures. Using big data from within these low-quality urban settlement areas can be a useful step towards generating information for a better understanding of their vulnerabilities. Big data for resilience is a recent field of research which offers tremendous potential for increasing disaster resilience, especially in the context of social resilience. This research focuses on unleashing the unrealised opportunities of big data through the differential social and economic frames that can contribute towards better-targeted information generation in disaster management. The scoping study aims to contribute to the understanding of the potential of big data in developing countries, particularly low-income countries, to empower vulnerable populations against natural hazards such as floods. Recognising the potential of providing real-time and long-term information for emergency management in flood-affected large urban settlements, this research concentrates on flood hazard and the use of remotely sensed data (NASA TRMM, Landsat) as the big data source for quick disaster response (and recovery) in targeted areas. The research question for the scoping study is: Can big data sources provide real-time and long-term information to improve emergency disaster management in urban settlements against floods in developing countries?
Previous research has identified several ways in which big data can speed up response to affected populations, but few attempts have been made to integrate these factors into an aggregated conceptual output. An international review of multi-disciplinary research, grey literature, grass-roots projects and emerging online social discourse will appraise the concepts and scope of big data to address the four objectives of the research and answer specific questions around the existing and future potential of big data, operationalisation and capacity building by agencies, associated risks, and prospects for maximising impact. The research proposes a concept design for a thematic review of existing secondary data sources, which will be used to provide a holistic picture of how big data can support resilience through technological change within the specific social and environmental contexts of developing countries. The implications of the study lie in system integration and an understanding of the socio-economic, political, legal and ethical contexts essential for investment decision-making for strategic impact and resilience-building in developing nations.
How to cite: Bhattacharya Mis, N.: Big Data for flood management: Realising the benefits for developing countries, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19654, https://doi.org/10.5194/egusphere-egu2020-19654, 2020.
EGU2020-6735 | Displays | ITS1.2/CL5.9
Using global remote sensing and weather data efficiently for agricultural hotspots monitoring anywhere anytime: the ASAP online system
Michele Meroni, Felix Rembold, Ferdinando Urbano, Guido Lemoine, Hervé Kerdiles, Ana Perez-Hoyos, Gabor Csak, Maria Dimou, and Petar Vojnovic
Monitoring agricultural production in vulnerable developing countries is important for food security assessment and requires near real-time (NRT) information on crop growing conditions for early detection of possible production deficits. The public online ASAP system (Anomaly hot Spots of Agricultural Production) is an early warning decision support tool based on weather data and direct observation of crop status as provided by remote sensing. Although decision makers and food security analysts are the main targeted user groups, all the information is made fully available to the public in a simple and well-documented online platform. The information further contributes to multi-agency early warning products such as the GEOGLAM Crop Monitor for Early Warning and food security assessments following the IPC-Cadre Harmonisé framework.
Low-resolution remote sensing (1 km) and meteorological (5-25 km) data are processed automatically every 10 days, and vegetation anomaly warnings are triggered at the first sub-national administrative level. The severity of the warnings is based on the observed land surface phenology and three main derived indicators computed at the 1 km grid level: a proxy of the current season's biomass production (the cumulative value of the Normalized Difference Vegetation Index from the start of season); an indicator of precipitation deficit (the Standardized Precipitation Index at the 3-month scale); and a water-balance model output (the Water Requirement Satisfaction Index). Warning maps and summary information are published on a web GIS every ten days and then further analyzed by analysts every month. This results in the identification of hotspot countries with potentially critical crop or rangeland production conditions.
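As a toy illustration of how a cumulative-NDVI biomass proxy could be turned into a graded warning, the current season's value can be compared against the historical distribution of the same quantity via a z-score (the function name and thresholds below are hypothetical, not the operational ASAP rules):

```python
import numpy as np

def ndvi_warning(cum_ndvi_current, cum_ndvi_history, z_thresholds=(-1.0, -1.5)):
    """Grade a biomass-deficit warning: compare the current season's cumulative
    NDVI (summed from the start of season) against its historical distribution."""
    mu = np.mean(cum_ndvi_history)
    sigma = np.std(cum_ndvi_history, ddof=1)
    z = (cum_ndvi_current - mu) / sigma
    if z <= z_thresholds[1]:
        return z, "severe deficit"
    if z <= z_thresholds[0]:
        return z, "moderate deficit"
    return z, "normal"
```

In an operational setting such a score would be computed per 1 km pixel and then aggregated over cropland or rangeland masks to the administrative level at which warnings are issued.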
In addition to the hotspot analysis and the warning explorer, users can also zoom in to the parcel level thanks to the so-called High Resolution Viewer, a web interface based on Google Earth Engine that allows users to visualize Sentinel-1, Sentinel-2 and Landsat imagery, plot temporal profiles, and perform basic anomaly operations (e.g. the current-year NDVI anomaly with respect to a reference year).
In the near future, it is planned to make the anomaly warnings available at the second sub-national level as well, and to integrate meteorological forecasts into the warning system.
How to cite: Meroni, M., Rembold, F., Urbano, F., Lemoine, G., Kerdiles, H., Perez-Hoyos, A., Csak, G., Dimou, M., and Vojnovic, P.: Using global remote sensing and weather data efficiently for agricultural hotspots monitoring anywhere anytime: the ASAP online system, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6735, https://doi.org/10.5194/egusphere-egu2020-6735, 2020.
Monitoring agricultural production in vulnerable developing countries is important for food security assessment and requires near real-time (NRT) information on crop growing conditions for early detection of possible production deficits. The public online ASAP system (Anomaly hot Spots of Agricultural Production) is an early warning decision support tool based on weather data and direct observation of crop status as provided by remote sensing. Although decision makers and food security analysts are the main targeted user groups, all the information is fully made available to the public in a simple and well documented online platform. The information further contributes to multi-agency early warning products such as the GEOGLAM Crop Monitor for Early Warning and food security assessments following the IPC-Cadre Harmonisé framework.
Low resolution remote sensing (1 km) and meteorological (5-25 km) data are processed automatically every 10 days and vegetation anomaly warnings are triggered at the first sub-national administrative level. The severity of the warnings is based on the observed land surface phenology and three main derived indicators computed at the 1 km grid level: a proxy of the current season biomass production (the cumulative value of the Normalized Difference Vegetation index from the start of season); an indicator of precipitation deficit (the Standardized Precipitation Index at the 3 month scale); and a water-balance model output (the Water Requirement Satisfaction Index).Warning maps and summary information are published on a web GIS every ten days and then further analyzed by analysts every month. This results in the identification of hotspot countries with potentially critical crop or rangelands production conditions.
In addition to the hotspot analysis and the warning explorer, users can also zoom in to the parcel level thanks to the so-called High Resolution Viewer, a web interface based on Google Earth Engine that allows users to visualize Sentinel-1, Sentinel-2, and Landsat imagery, plot temporal profiles, and perform basic anomaly operations (e.g. the current-year NDVI anomaly with respect to a reference year).
In the near future, it is planned to make the anomaly warnings available at the second sub-national administrative level as well, and to integrate meteorological forecasts into the warning system.
How to cite: Meroni, M., Rembold, F., Urbano, F., Lemoine, G., Kerdiles, H., Perez-Hoyos, A., Csak, G., Dimou, M., and Vojnovic, P.: Using global remote sensing and weather data efficiently for agricultural hotspots monitoring anywhere anytime: the ASAP online system, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6735, https://doi.org/10.5194/egusphere-egu2020-6735, 2020.
EGU2020-7476 | Displays | ITS1.2/CL5.9
The experience of Iberoamerican Meteorological Cooperation in the improvement of the provision of Weather and Climate ServicesJorge Tamayo
Cooperation between the Iberoamerican National Meteorological and Hydrological Services (NMHSs) is coordinated through the Conference of Directors of Iberoamerican NMHSs (CIMHET), which takes advantage of the unique framework provided by the shared cultural and linguistic heritage of the region. It is constituted by all 21 NMHSs of Iberoamerica, including Spain and Portugal. CIMHET provides a forum for dialogue between Iberoamerican NMHSs, recognized by the World Meteorological Organization (WMO) as an example of cooperation and operability.
At its annual meetings, held since 2003, the Conference approves an action plan along three strategic lines: institutional strengthening and resource mobilization; provision of meteorological, climatic and hydrological services; and education and training.
Activities carried out under the latest action plans to improve the provision of Weather and Climate Services (WCS) include support for the creation and operation of Virtual Regional Centers for the Prevention of Severe Events; the development of a free database management system, MCH, which has been donated to WMO for distribution among interested NMHSs; the implementation of a regional lightning detection network in Central America; and the development of downscaled climate change scenarios for Central America, with web-based access to the information.
Proper provision of WCS also requires sufficient and properly trained NMHS staff. Training, for both technical and management personnel, has therefore been one of the fundamental elements of CIMHET's activities, with more than 60 courses and workshops since 2004, most of them face-to-face, attended by more than 1500 students.
Appropriate infrastructure and human resources are also needed for NMHSs to provide their services to society in a reliable and timely manner. To this end, several modernization projects have been developed, built mainly around the needs of the different user sectors and demonstrating the potential of NMHSs for the various national social and economic sectors once their shortcomings are addressed.
Finally, intersectoral coordination mechanisms have been established with other Iberoamerican networks with common interests, such as the Iberoamerican Network of Climate Change Offices (RIOCC) and the Conference of Iberoamerican Directors of Water (CODIA). A number of priority activities related to climate change adaptation issues linked to extreme hydrometeorological phenomena have been identified, and their development has begun.
How to cite: Tamayo, J.: The experience of Iberoamerican Meteorological Cooperation in the improvement of the provision of Weather and Climate Services, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7476, https://doi.org/10.5194/egusphere-egu2020-7476, 2020.
EGU2020-10714 | Displays | ITS1.2/CL5.9
Water, Weather and Climate Services for Africa: the case of Ghana and KenyaFrank Ohene Annor, Nick van de Giesen, and Marie-Claire ten Veldhuis
Close to 80% of Sub-Saharan African farmers rely on rainfed agriculture. It is therefore important that the weather and climate of this region are well understood, since agriculture accounts for more than 15% of GDP in, for instance, Ghana and Kenya. However, uncertainties in weather forecasts and climate projections are particularly high for this region, which leads to poor weather and climate services for agricultural production. One underlying factor among many is the poor condition of weather and climate infrastructure in Sub-Saharan Africa. The Trans-African Hydro-Meteorological Observatory (TAHMO), together with several National Meteorological and Hydrological Services (NMHSs) in Africa and other partners in the TWIGA project (http://twiga-h2020.eu/), is building a network of weather and hydrological stations to address this need. This network builds on the over 500 TAHMO stations in countries of interest such as Ghana, Kenya, Uganda, South Africa, and Mozambique.
The observation network includes automatic weather stations, soil moisture sensors, Global Navigation Satellite System (GNSS) receivers, distributed temperature sensing (DTS), lightning sensors, neutron counters, evaporometers, laser speckle scintillometers, accelerometers for tree weighing, intervalometer rain gauges, a flood mapper using citizen-science mobile applications (apps), and a crop doctor using drones and apps. The project has accelerated the Technology Readiness Levels (TRLs) of these innovations, with some already set up for operational purposes and delivering the first set of TWIGA services, such as "How humid is my environment?", crop detection and condition monitoring, weather-based alerts for citizens/farmers, area-specific near real-time weather forecasts for farmers, crop insurance based on a soil index, a plastic accumulation monitor, short-term prediction for solar energy, and precipitable water vapour monitoring with TWIGA GNSS stations. These innovations and the services developed using the value chain approach are a game changer for Sub-Saharan Africa.
How to cite: Annor, F. O., van de Giesen, N., and ten Veldhuis, M.-C.: Water, Weather and Climate Services for Africa: the case of Ghana and Kenya, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10714, https://doi.org/10.5194/egusphere-egu2020-10714, 2020.
EGU2020-15094 | Displays | ITS1.2/CL5.9
Building synergies in regional climate services for Southeast Asia: The ASEAN Regional Climate Data, Analysis and Projections (ARCDAP) workshop series.Gerald Lim, Aurel Moise, Raizan Rahmat, and Bertrand Timbal
Southeast Asia (SEA) is a rapidly developing and densely populated region that is home to over 600 million people. This, together with the region’s high sensitivity, exposure and low adaptive capacities, makes it particularly vulnerable to climate change and extremes such as floods, droughts and tropical cyclones. While the last decade saw some countries in SEA develop their own climate change projections, studies were largely uncoordinated and most countries still lack the capability to independently produce robust future climate information. Following a proposal from the World Meteorological Organisation (WMO) Regional Association (RA) V working group on climate services, the ASEAN Regional Climate Data, Analysis and Projections (ARCDAP) workshop series was conceived in 2017 to bridge these gaps in regional synergies. The ARCDAP series has been organised annually since 2018 by the ASEAN Specialised Meteorological Centre (hosted by Meteorological Service Singapore) with support from WMO through the Canada-funded Climate Risk and Early Warning Systems (Canada-CREWS) initiative.
This presentation will cover the activities and outcomes of the first two workshops, as well as the third, which will be held in February 2020. The ARCDAP series has so far brought together representatives of ASEAN National Meteorological and Hydrological Services (NMHSs), climate scientists, and end-users from policy-making and a variety of vulnerability and impact assessment (VIA) sectors, to discuss and identify best practices regarding the delivery of climate change information, data usage and management, advancing the science, and more. Notable outputs include two comprehensive workshop reports and a significant regional contribution to the HadEX3 global land in-situ-based dataset of temperature and precipitation extremes, motivated by work done with the ClimPACT2 software.
The upcoming third workshop will encourage the uptake of the latest ensemble of climate simulations from the Coupled Model Intercomparison Project (CMIP6) using CMIP-endorsed tools such as ESMValTool. This addresses the need for ASEAN climate change practitioners to upgrade their knowledge of the latest global climate model database. It is anticipated that, with continued support from WMO, the series will continue with a fourth workshop in 2021 targeting the assessment of downscaling experiments.
How to cite: Lim, G., Moise, A., Rahmat, R., and Timbal, B.: Building synergies in regional climate services for Southeast Asia: The ASEAN Regional Climate Data, Analysis and Projections (ARCDAP) workshop series. , EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-15094, https://doi.org/10.5194/egusphere-egu2020-15094, 2020.
EGU2020-17108 | Displays | ITS1.2/CL5.9
The Drought & Flood Mitigation Service in Uganda – First ResultsHermen Westerbeeke, Deus Bamanya, and George Gibson
Since 2017, the governments of Uganda and the United Kingdom have been taking an innovative approach to mitigating the impacts of drought and floods on Ugandan society through the DFMS Project. Recognising both that the only sustainable solution to this issue is continued capacity development in Uganda's National Meteorological and Hydrological Services, and that this capacity development will take time to deliver results, the Drought & Flood Mitigation Service Project developed DFMS, bringing together meteorological, hydrological, and Earth observation information products and making them available to decision-makers in Uganda.
The DFMS Platform was designed and developed in cooperation between a group of UK organisations, led by the REA Group and including the Met Office, and five Ugandan government agencies, led by the Ministry of Water and Environment (MWE) and including UNMA. In 2020, a 2.5-year Demonstration Phase began, in which UNMA, MWE, and the other agencies will trial DFMS and the platform will be fine-tuned to their needs. We will present the first experiences with DFMS, including how it is being used in relation to SDG monitoring, and will showcase the platform itself in what we hope will be a very interactive session.
DFMS is a suite of information products, and access only requires an Internet-connected device (e.g. PC, laptop, tablet, or smartphone). Data and information are provided as maps or in graphs and tables, and several analysis tools allow for bespoke data processing and visualisation. Alarms can be tailored to indicate when observed or forecast parameters exceed user-defined thresholds. DFMS also comes with programmable interfaces that allow it to be integrated with other automatic systems. The DFMS Platform is built using open-source software, including Open Data Cube technology for storing and analysing Earth observation data. It makes extensive use of (free) satellite remote sensing data, but also takes in data gathered in situ. Because the platform is scalable and replicable, DFMS can be extended with additional features (e.g. related to landslides or crop diseases) or rolled out in other countries in the region and beyond.
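The user-defined threshold alarms described above can be sketched as follows. This is a hypothetical illustration only: the parameter names, the rule structure, and the trigger logic are assumptions, not the actual DFMS interface:

```python
from dataclasses import dataclass

@dataclass
class AlarmRule:
    parameter: str             # e.g. "rainfall_mm_24h" or "river_level_m" (illustrative names)
    threshold: float
    direction: str = "above"   # trigger when the value goes "above" or "below" the threshold

    def triggered(self, value: float) -> bool:
        if self.direction == "above":
            return value > self.threshold
        return value < self.threshold

def check_alarms(rules, observations):
    """Return the rules triggered by the latest observed or forecast values."""
    return [r for r in rules
            if r.parameter in observations and r.triggered(observations[r.parameter])]

rules = [AlarmRule("rainfall_mm_24h", 50.0),
         AlarmRule("soil_moisture_pct", 10.0, direction="below")]
obs = {"rainfall_mm_24h": 63.2, "soil_moisture_pct": 14.5}
print([r.parameter for r in check_alarms(rules, obs)])  # only the rainfall rule fires
```

The same check could run against forecast fields as well as observations, which is what allows an alarm to fire ahead of an event rather than only during it.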
How to cite: Westerbeeke, H., Bamanya, D., and Gibson, G.: The Drought & Flood Mitigation Service in Uganda – First Results, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17108, https://doi.org/10.5194/egusphere-egu2020-17108, 2020.
EGU2020-5712 * | Displays | ITS1.2/CL5.9 | Highlight
WaterApps: co-producing tailor-made water and weather information services with and for farmers for sustainable agriculture in peri-urban delta areas in Ghana and BangladeshSpyridon Paparrizos, Talardia Gbangou, Uthpal Kumar, Rebecca Sarku, Joreen Merks, Saskia Werners, Art Dewulf, Fulco Ludwig, and Eric van Slobbe
Water for agriculture in peri-urban areas is vital to safeguard sustainable food production. Due to the dynamics of urbanization in deltas as well as climate change, water availability (too much, not enough, too late or too early) is becoming erratic, and farmers can no longer rely only on their own experience for agricultural decision-making. The WaterApps project develops tailor-made water and weather information services with and for farmers in the peri-urban areas of the urbanizing deltas of Accra, Ghana and Khulna, Bangladesh, to improve water and food security and contribute towards sustainable agriculture.
The project’s design framework initially focuses on the farmers who are involved and supported during its course in the study areas, and assesses their needs. Based on this baseline needs assessment, Climate Information Services that provide tailor-made water and weather information are being developed together with the farmers in a co-production mode, and are continuously monitored and evaluated to ensure their effectiveness.
WaterApps combines the latest information technology for knowledge sharing, such as apps and social media, with local farmers’ information needs, demands and preferences to produce tailor-made Climate Information Services.
It addresses the technical and design aspects of the water and climate information services, such as the skill of the provided information at different spatio-temporal scales and the role of Local Forecasting Knowledge in the study areas.
Currently, an app is being developed which, besides displaying the scientific forecast, gives farmers the possibility to provide their own indigenous forecasts. The scientific and indigenous forecasts are then integrated into a hybrid forecast.
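One simple way such an integration could work is a skill-weighted blend of the two probability forecasts. This is a sketch under assumed weights; the abstract does not specify the project's actual integration method:

```python
def hybrid_forecast(p_scientific, p_indigenous, w_scientific=0.6):
    """Blend two probability-of-rain forecasts into a hybrid probability.

    w_scientific is the weight given to the scientific forecast; the
    remainder goes to the farmers' indigenous forecast. In practice the
    weights could be updated from each source's historical hit rate.
    """
    if not 0.0 <= w_scientific <= 1.0:
        raise ValueError("weight must be between 0 and 1")
    return w_scientific * p_scientific + (1.0 - w_scientific) * p_indigenous

# Scientific forecast: 70% chance of rain; indigenous forecast: 40%.
print(hybrid_forecast(0.70, 0.40))  # 0.58 with the default 60/40 weighting
```

A linear blend like this keeps the result interpretable as a probability and degrades gracefully when one source is missing (its weight can simply be set to zero).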
In Bangladesh, Farmers’ Field Schools (FFS) have been initiated, together with meetings and trainings. The objective was to engage with farmers on a weekly basis by providing long-term weather forecasts and discussing their relevance to upcoming agricultural activities. Social media are employed to inform agricultural extension officers and stakeholders on a daily basis.
Both cases in Bangladesh and Ghana show the importance of two-way communication and co-production with and for farmers. The co-production of water and weather information services empowers and improves livelihoods of small/medium farmers and builds capacity for enhancing sustainable food production. Finally, it lays the ground for upscaling in other urban-rural delta zones in the developing world.
How to cite: Paparrizos, S., Gbangou, T., Kumar, U., Sarku, R., Merks, J., Werners, S., Dewulf, A., Ludwig, F., and van Slobbe, E.: WaterApps: co-producing tailor-made water and weather information services with and for farmers for sustainable agriculture in peri-urban delta areas in Ghana and Bangladesh, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5712, https://doi.org/10.5194/egusphere-egu2020-5712, 2020.
EGU2020-21637 | Displays | ITS1.2/CL5.9
The Climate Information platform: A climate science basis for climate adaptation and mitigation activities in developing countriesFrida Gyllensvärd, Christiana Photiadou, Berit Arheimer, Lorna Little, Elin Sjökvist, Katharina Klehmet, Thomas Bosshard, Léonard Santos, Maria Elenius, René Capell, and Isabel Ribeiro
The World Meteorological Organization (WMO), the Green Climate Fund (GCF) and the Swedish Meteorological and Hydrological Institute (SMHI) are collaborating on a project providing expert services for enhancing the climate science basis of GCF-funded activities. The goal is to ensure that the causal links between climate and climate impacts, and between climate action and societal benefits, are fully grounded in the best available climate data and science. Five pilot countries are participating in this phase of the project: St Lucia, Democratic Republic of Congo, Cape Verde, Cambodia, and Paraguay, with an audience of national experts, international stakeholders, and policy and decision makers.
The scientific framework which we follow here is a compendium of available data, methods and tools for analysing and documenting the past, present and potential future climate conditions which a GCF-funded project or adaptation plan might seek to address. Through the WMO-GCF-SMHI project, the methodology, scientific framework, data, methods and tools to link global to local data are complemented by hands-on support, backed by access to relevant data and tools through a structured access platform.
In this presentation we elaborate on the lessons learnt from a number of workshops that were designed for the five pilot countries. The main focus of the workshops was a hands-on opportunity for national experts and international stakeholders to work with the WMO methodology in order to develop a GCF proposal for future funding. The participants in each country worked intensively during a five-day workshop on each step of the methodology: problem definition, identification of the climate science basis, interpretation of data analysis, selection of the best adaptation/mitigation options, and assessment of adaptation/mitigation effectiveness.
Assessing past and current climate and climate projections is the basis for inferring real and potential climate change and related impacts. For this, SMHI has developed a new interactive online platform/service (https://climateinformation.org/) to facilitate the communication between the GCF and developing countries and to provide access to state-of-the-art climate data for use in impact assessment planning. The new service provides data for robust climate analysis to underpin decision-making when planning measures for climate adaptation or mitigation. Readily available climate indicators will help define future problems, assess climatic stressors, and analyse current and future risks. This builds a climate case, which is the basis for developing interventions and proposing investments. In particular, the service provides:
- Easy access to many climate indicators, based on state-of-the-art climate science.
- Instant summary reports of climate change for any site on the globe.
- Guidance on how to link global changes to local observations.
How to cite: Gyllensvärd, F., Photiadou, C., Arheimer, B., Little, L., Sjökvist, E., Klehmet, K., Bosshard, T., Santos, L., Elenius, M., Capell, R., and Ribeiro, I.: The Climate Information platform: A climate science basis for climate adaptation and mitigation activities in developing countries , EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21637, https://doi.org/10.5194/egusphere-egu2020-21637, 2020.
The World Meteorological Organization (WMO), the Green Climate Fund (GCF) and the Swedish Meteorological and Hydrological Institute (SMHI) are collaborating on a project providing expert services for enhancing the climate science basis of GCF-funded activities. The goal is to ensure that the causal links between climate and climate impacts, and between climate action and societal benefits, are fully grounded in the best available climate data and science. Five pilot countries are participating in this phase of the project: St Lucia, Democratic Republic of Congo, Cape Verde, Cambodia, and Paraguay, with an audience of national experts, international stakeholders, and policy and decision makers.
The scientific framework that we follow here is a compendium of available data, methods and tools for analysing and documenting the past, present and potential future climate conditions that a GCF-funded project or adaptation plan might seek to address. Through the WMO-GCF-SMHI project, this methodology and its data, methods and tools for linking global to local data are complemented by hands-on support, backed by access to relevant data and tools through a structured access platform.
In this presentation we elaborate on the lessons learnt from a series of workshops designed for the five pilot countries. The main focus of the workshops was to give national experts and international stakeholders a hands-on opportunity to work with the WMO methodology in order to develop a GCF proposal for future funding. In each country, the participants worked intensively during a five-day workshop on each step of the methodology: problem definition, identification of the climate science basis, interpretation of data analysis, selection of the best adaptation/mitigation options, and assessment of adaptation/mitigation effectiveness.
Assessing past and current climate and climate projections is the basis for inferring real and potential climate change and related impacts. For this, SMHI has developed a new interactive online platform (https://climateinformation.org/) to facilitate communication between the GCF and developing countries and to provide access to state-of-the-art climate data for use in impact assessment planning. The new service provides data for robust climate analysis to underpin decision-making when planning measures for climate adaptation or mitigation. Readily available climate indicators help users define future problems, assess climatic stressors, and analyse current and future risks. This builds the climate case that is the basis for developing interventions and proposing investments. In particular, the service provides:
- Easy access to many climate indicators, based on state-of-the-art climate science.
- Instant summary reports of climate change for any site on the globe.
- Guidance on how to link global changes to local observations.
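To make the indicator idea concrete, the sketch below counts one widely used climate indicator, "tropical nights" (days whose minimum temperature exceeds 20 °C, following the common ETCCDI definition). The temperature series is invented and the function is an illustration, not part of the platform's actual API.

```python
# Hypothetical sketch of a simple climate indicator of the kind such a
# service exposes: "tropical nights", i.e. days whose minimum temperature
# exceeds 20 degrees C (the common ETCCDI threshold). The data are invented.

def tropical_nights(daily_tmin, threshold=20.0):
    """Count days with minimum temperature strictly above the threshold."""
    return sum(1 for t in daily_tmin if t > threshold)

daily_tmin = [18.5, 20.3, 21.1, 19.8, 22.4, 20.0, 23.2]  # one week, deg C
print(tropical_nights(daily_tmin))  # 4 of the 7 nights exceed 20 deg C
```

In a real service the same counting logic would run over decades of gridded daily data for any chosen location, which is what makes instant site-specific summary reports possible.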
How to cite: Gyllensvärd, F., Photiadou, C., Arheimer, B., Little, L., Sjökvist, E., Klehmet, K., Bosshard, T., Santos, L., Elenius, M., Capell, R., and Ribeiro, I.: The Climate Information platform: A climate science basis for climate adaptation and mitigation activities in developing countries, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21637, https://doi.org/10.5194/egusphere-egu2020-21637, 2020.
EGU2020-8127 | Displays | ITS1.2/CL5.9
Co-designing a flood forecasting and alert system in West Africa with decision-making methods: the transdisciplinary project FANFAR
Judit Lienert, Jafet Andersson, and Francisco Silva Pinto
Floods are a serious concern in West Africa, and their severity will likely increase with climate change. The European Union-financed, inter- and transdisciplinary project FANFAR (https://fanfar.eu/) aims at providing an operational flood forecast and alert pilot system for West Africa, based on an open-source hydrological model employed in a cloud-based Information and Communications Technology (ICT) environment. To achieve this, an existing pilot ICT system is co-designed and co-adapted to meet needs and preferences of West African users. Four workshops are carried out in West Africa from 2018 to 2020, each with around 40 representatives from hydrological and emergency management agencies from 17 West African countries.
To better understand the stakeholders’ needs and preferences, and to prioritize the development of the FANFAR ICT flood forecasting and alert system, we use Multi-Criteria Decision Analysis (MCDA). This framework guides the project through a stepwise procedure to develop the FANFAR ICT system such that it best fulfils those objectives that are fundamentally important to stakeholders. The first step of MCDA is problem structuring, starting with a stakeholder analysis to identify the most important participants for the co-design workshops. In the first co-design workshop (Niamey, Niger, 2018), we then used different problem structuring methods (PSMs) to brainstorm which objectives are fundamentally important to West African stakeholders, and which options (ICT system configurations) might achieve these objectives. To generate objectives, we used online and pen-and-paper surveys, group brainstorming, and plenary discussions. To generate options, we used a strategy generation table and the brainwriting-635 method. Between workshops, the FANFAR consortium post-processed the objectives and options. We also interviewed experts to predict how well each option achieves each objective, including the uncertainty, which is later propagated to the MCDA results with Monte Carlo simulation.
The refined objectives were again discussed in plenary sessions in co-design workshop 2 (Accra, Ghana, 2019), and we elicited the participants’ preferences in small group sessions. Weight elicitation captures the trade-offs stakeholders are willing to make between objectives if not all of them can be fully achieved. We used the card procedure (Simos revised procedure) and the popular swing method to elicit weights. As additional preference information for the MCDA modelling, we elicited the shape of the most important marginal value functions, which “translate” the objectives’ measurement units to a neutral value between 0 (objective is not achieved) and 1 (fully achieved). To give one example: for the objective “high accuracy of information”, the best case “100% accuracy” translates to the value v=1; the worst case “0% accuracy” translates to v=0. Furthermore, we asked whether stakeholders agree with the implications of the commonly used (linear) additive aggregation model in MCDA (weighted average).
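The additive model with marginal value functions and Monte Carlo propagation described above can be sketched in a few lines. The objectives, weights, value-function shapes, and uncertainty ranges below are invented for illustration; they are not FANFAR's actual elicited quantities.

```python
import random

# Illustrative additive MCDA model: v(a) = sum_i w_i * v_i(x_i).
# All numbers are assumptions, not FANFAR's elicited preferences.

def linear_value(x, worst, best):
    """Marginal value function mapping an attribute level to [0, 1]."""
    return (x - worst) / (best - worst)

def additive_value(levels, weights, value_funcs):
    """Weighted-average aggregation over the objectives."""
    return sum(w * vf(x) for x, w, vf in zip(levels, weights, value_funcs))

# Two hypothetical objectives: forecast accuracy (%) and alert lead time (h).
weights = [0.7, 0.3]                       # elicited trade-offs, summing to 1
vfs = [lambda x: linear_value(x, 0, 100),  # accuracy: 0% -> 0, 100% -> 1
       lambda x: linear_value(x, 0, 72)]   # lead time: 0 h -> 0, 72 h -> 1

# Uncertain expert predictions for one option, propagated by Monte Carlo.
random.seed(1)
samples = [additive_value([random.gauss(80, 5), random.gauss(48, 6)],
                          weights, vfs)
           for _ in range(10_000)]
mean_v = sum(samples) / len(samples)
print(round(mean_v, 2))  # overall value of the option, here about 0.76
```

The Monte Carlo samples give not just a mean value per option but a full distribution, so options can be compared with their uncertainty made explicit.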
We will present and discuss the main results of the MCDA modelling. Our main aim is to give insights into the participatory co-design process employed in FANFAR, and to offer recommendations for other projects. We will discuss the problem structuring and preference elicitation methods, and how well they worked in this West African context.
How to cite: Lienert, J., Andersson, J., and Silva Pinto, F.: Co-designing a flood forecasting and alert system in West Africa with decision-making methods: the transdisciplinary project FANFAR, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8127, https://doi.org/10.5194/egusphere-egu2020-8127, 2020.
EGU2020-22322 | Displays | ITS1.2/CL5.9
Combining Indigenous and Scientific Forecast for Improved Climate Services in Ghana
Emmanuel Nyadzi, Saskia Werners, Robbert Biesbroek, and Fulco Ludwig
Extreme weather events and climate change are affecting the livelihoods of farmers across the globe. Accessible and actionable weather and seasonal climate information can be used as an adaptation tool to support farmers in taking adaptive farming decisions. There are increasing calls to integrate scientific forecasts with indigenous forecasts to improve weather and seasonal climate information at the local scale. In Northern Ghana, farmers lament the quality of scientific forecast information and instead depend on their own indigenous forecasts for adaptive decisions. To improve this, we developed an integrated probability forecast (IPF) method to combine scientific and indigenous forecasts into a single forecast, and tested its reliability with a binary forecast verification method as a proof of concept. We also evaluated the acceptability of IPF among farmers by computing an index from multiple-response questions with a good internal consistency check. Results show that, for reliability, IPF on average performed better than the indigenous and scientific forecasts at a daily timescale. At the seasonal timescale, the indigenous forecast performed best overall, followed by IPF and then the scientific forecast. However, IPF has far greater acceptability potential. About 93% of farmers prefer the IPF method as it provides a reliable forecast, requires less time, and helps deal with contradictory forecast information. Results also show that farmers already use insights from both forecasts (complementarily) to inform their farm decisions. However, their complementary method does not resolve the issue of contradictory forecast information. We conclude that, as a proof of concept, integrating indigenous and scientific forecasts has high acceptability and can potentially increase forecast reliability and uptake.
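The abstract does not spell out how the two forecasts are merged; purely as an illustration, the sketch below combines daily rain probabilities with a simple linear opinion pool and scores the result with one binary verification measure (hit rate). The pooling rule, weights, and numbers are assumptions, not the authors' actual IPF method.

```python
# Illustrative combination of two probabilistic binary forecasts and a
# basic verification score. All values are invented for the sketch.

def integrate(p_indigenous, p_scientific, w=0.5):
    """Linear opinion pool: weighted average of the two rain probabilities."""
    return w * p_indigenous + (1 - w) * p_scientific

def hit_rate(forecast_probs, observed, threshold=0.5):
    """Binary verification: fraction of observed events that were forecast."""
    hits = sum(1 for p, o in zip(forecast_probs, observed)
               if o and p >= threshold)
    events = sum(observed)
    return hits / events if events else float("nan")

indigenous = [0.8, 0.3, 0.6, 0.2]   # daily rain probabilities (illustrative)
scientific = [0.6, 0.5, 0.4, 0.1]
observed   = [1, 0, 1, 0]           # 1 = rain occurred that day

ipf = [integrate(pi, ps) for pi, ps in zip(indigenous, scientific)]
print(hit_rate(ipf, observed))  # both observed rain days were forecast
```

A full verification would also track false alarms and correct rejections from the same binary contingency table, which is what allows daily and seasonal reliability to be compared across the three forecast types.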
How to cite: Nyadzi, E., Werners, S., Biesbroek, R., and Ludwig, F.: Combining Indigenous and Scientific Forecast for Improved Climate Services in Ghana, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22322, https://doi.org/10.5194/egusphere-egu2020-22322, 2020.
EGU2020-22514 | Displays | ITS1.2/CL5.9
Success of the co-production and delivery of local and scientific weather forecasts information with and for smallholder farmers in Ghana
Talardia Gbangou, Rebecca Sarku, Erik Vanslobbe, Fulco Ludwig, Gordana Kranjac-Berisavljevic, Spyridon Paparrizos, and Art Dewulf
Many West African farmers struggle to cope with changing weather and climatic conditions that keep them from making optimal decisions and attaining food and income security. The development of more accessible and credible weather and climate information services (WCIS) can help local farmers improve their adaptive capacity. Adequate WCIS often require joint collaboration between farmers and scientists to co-create integrated local and scientific forecasting knowledge. We examine (i) the design requirements (i.e. both technical and non-technical tools) and (ii) the outcomes of a successful implementation of the co-production and delivery of WCIS in Ada East district, Ghana. We implemented a user-driven design approach in a citizen science experiment involving prototype design and testing, training workshops, and interviews with farmers and with agricultural and meteorological extension agents from 2018 to 2019. Farmers were provided with digital tools (i.e. smartphones with web and mobile applications) and rain gauges as research instruments to collect and receive weather forecast data and to interact with scientists.
Our results show that farmers’ engagement increased over time and was associated with the trainings and with improvements to the design features of the applications used. The evaluation shows an increase over time in the usability of the tools, in networking with other farmers, and in the understanding of the probabilistic (uncertainty) aspect of the forecasts. Local farmers evaluated both the local and the scientific forecasts as sufficiently accurate and useful for their daily farming decisions. We conclude that using modern technology in a co-production process, with targeted training, can improve access to and use of weather forecast information.
How to cite: Gbangou, T., Sarku, R., Vanslobbe, E., Ludwig, F., Kranjac-Berisavljevic, G., Paparrizos, S., and Dewulf, A.: Success of the co-production and delivery of local and scientific weather forecasts information with and for smallholder farmers in Ghana, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22514, https://doi.org/10.5194/egusphere-egu2020-22514, 2020.
ITS1.4/HS4.8 – Reducing the impacts of natural hazards through forecast-based action: from early warning to early action
EGU2020-11979 | Displays | ITS1.4/HS4.8
The Progress of KMA’s Impact-based Forecasting
Hyojin Han, YoungYoung Park, Ji-Hyeon Kim, Yongjun Ahn, KyongJun Lee, Ji Ae Song, Yeongseon Kim, and Wonho Kim
The Korea Meteorological Administration (KMA) set “Impact-based forecasting (IBF) for mitigation of meteorological disaster risks” as a main policy goal in 2016. As a first step toward the goal, each regional office of the KMA operated a prototype impact-based forecast service tailored to the major severe weather conditions in its region from 2016 to 2018. The prototype service was found to contribute to reducing meteorological disasters in those regions. In order to quantify the impacts caused by meteorological disasters, a multi-ministerial R&D project was begun in 2018, aiming to develop Hazard Impact Models (HIM) for heavy rainfall and for heatwaves/coldwaves. The project will be completed by the end of 2020, and the developed HIM will be used in KMA's operational IBF.
The KMA officially launched its heatwave IBF service from June to September 2019 in order to support effective reduction of heatwave impacts. The KMA provided risk levels in different colors (attention-green, caution-yellow, warning-orange, danger-red), impact information, and response tips for seven sectors—health, industry, livestock, aquaculture, agriculture, transportation and electric power—considering regional characteristics. This information was disseminated to the public on the KMA's website. It was also provided to disaster-response agencies through the Meteorological Information Portal Service System for Disaster Prevention, as well as to local governments’ disaster response managers and officials supporting socially vulnerable people through mobile text messages. According to a user satisfaction survey, a great number of users responded positively to the KMA heatwave IBF. Based on this success, a coldwave IBF trial service was offered from December 2019 to March 2020. In addition, KMA plans to expand IBF to other high-impact weather such as typhoons, heavy snow, and heavy rainfall.
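The four color-coded levels suggest a likelihood-impact risk matrix of the kind commonly used in impact-based forecasting. The sketch below is an illustrative assumption of that structure, not KMA's actual operational rules.

```python
# Hypothetical impact-based risk matrix: risk level combines hazard
# likelihood with expected sector impact. Matrix values are assumptions,
# not KMA's operational criteria.

LEVELS = ["attention-green", "caution-yellow", "warning-orange", "danger-red"]

# Rows: likelihood (low .. very high); columns: impact (minimal .. severe).
RISK_MATRIX = [
    [0, 0, 1, 1],
    [0, 1, 1, 2],
    [1, 1, 2, 3],
    [1, 2, 3, 3],
]

def risk_level(likelihood, impact):
    """Look up the color-coded risk level (indices 0-3 on each axis)."""
    return LEVELS[RISK_MATRIX[likelihood][impact]]

# A very likely heatwave with severe expected health impact:
print(risk_level(3, 3))  # danger-red
```

Separating likelihood from impact in this way is what lets the same weather event carry different risk levels for, say, the health and livestock sectors in the same region.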
How to cite: Han, H., Park, Y., Kim, J.-H., Ahn, Y., Lee, K., Song, J. A., Kim, Y., and Kim, W.: The Progress of KMA’s Impact-based Forecasting, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11979, https://doi.org/10.5194/egusphere-egu2020-11979, 2020.
EGU2020-6003 | Displays | ITS1.4/HS4.8
Designing a multi-objective framework for forecast-based action of extreme rains in Peru
Jonathan Lala, Juan Bazo, and Paul Block
The last few years have seen a major innovation within disaster management and financing through the emergence of standardized forecast-based action protocols. Given sufficient forecasting skill and lead time, financial resources can be shifted from disaster response to disaster preparedness, potentially saving both lives and property. Short-term (hours to days) early warning systems are common worldwide; however, longer-term (months to seasons) early actions are still relatively under-studied. Seeking to address both, the Peruvian Red Cross has developed an Early Action Protocol (EAP) for El Niño-related extreme precipitation and floods. The EAP has well-defined risk metrics, forecast triggers, and early actions ranging from 5 days to 3 months before a forecasted disaster. Changes in climate regimes, forecast technology, or institutional and financial constraints, however, may significantly alter expected impacts of these early actions. A robust sensitivity analysis of situational and technological constraints is thus conducted to identify benefits and tradeoffs of various actions given various future scenarios, ensuring an adaptive and effective protocol that can be used for a wide range of changing circumstances.
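The EAP's actual triggers are not detailed in the abstract; as one common formalization of a forecast trigger, the sketch below uses the classic cost-loss rule, under which early action pays off in expectation when the event probability exceeds the ratio of action cost to avoided loss. The rule and the numbers are illustrative assumptions.

```python
# Illustrative cost-loss trigger for forecast-based early action.
# Not the Peruvian Red Cross EAP's actual trigger definition.

def should_trigger(event_probability, action_cost, avoided_loss):
    """Cost-loss decision rule: early action is worthwhile in expectation
    when p * avoided_loss >= action_cost, i.e. p >= cost/loss."""
    return event_probability >= action_cost / avoided_loss

# Acting costs 10 (e.g. pre-positioning relief); a flood would cause
# losses of 100 that early action prevents, so the trigger threshold
# is a forecast probability of 0.1.
print(should_trigger(0.25, 10, 100))  # True: act on this forecast
print(should_trigger(0.05, 10, 100))  # False: probability too low
```

A sensitivity analysis of the kind described above would vary the forecast skill, costs, and losses in such a rule to see how robust the chosen triggers are across future scenarios.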
How to cite: Lala, J., Bazo, J., and Block, P.: Designing a multi-objective framework for forecast-based action of extreme rains in Peru, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6003, https://doi.org/10.5194/egusphere-egu2020-6003, 2020.
EGU2020-1367 | Displays | ITS1.4/HS4.8
Flood preparedness decisions and stakeholders' perspectives on flood early warning in Bangladesh
Sazzad Hossain, Hannah Cloke, Andrea Ficchì, and Elisabeth Stephens
There is high temporal variability in the occurrence of monsoon floods in Bangladesh during the South Asian summer monsoon. Detailed flood forecast information about flood timing and duration can play a vital role in flood preparedness decisions. The objective of this study is to understand different stakeholder perceptions of existing forecasting tools and data, and how these can support preparedness and response activities. Forecast users can be divided into three broad categories: national, sub-national, and community level. Stakeholders working at the national level are involved in policy making, while those at the sub-national level are involved in implementing policies. In order to identify the appropriate lead time for better flood preparedness and the challenges in communicating probabilistic forecasts to users, semi-structured interviews with key stakeholders involved in various sectors of flood disaster management at the national and sub-national levels, community-level household surveys, focus group discussions, and a national consultation workshop were undertaken during the 2019 monsoon.
It was found that all major stakeholders working at the national and sub-national levels are aware of the availability of forecasts and receive flood forecasts from the Flood Forecasting and Warning Centre (FFWC). However, about 40% of respondents at the community level do not receive forecast information. Before a flood event, policy-level stakeholders need to know the availability of resources and the state of preparedness at the sub-national level for better response activities. Sub-national stakeholders of different government agencies, in turn, act as a bridge between the policy level and the local community. Existing short-range forecasts cannot provide information about the potential flood duration, which is essential for resource assessment, mobilization, and preparedness activities.
People living in the floodplain are aware of the flood season, as flooding is an annual phenomenon. However, they can anticipate flood events only 2 to 3 days beforehand based on the available early warning and their risk knowledge. This short-range forecast can be used for some basic household-level response activities such as protecting household equipment or moving livestock to a safer place. It is essential for them to know the likely duration and flood extent for agricultural decisions such as when to transplant young crops into the field. The study found that all stakeholders need forecast information with a lead time of 15 to 20 days for better flood preparedness decisions. People have mostly seen deterministic forecasts so far and are not used to probabilistic forecasts with multiple scenarios for the same event. However, national forecast bulletins may include the probability of flooding based on a threshold known as the flood danger level. Capacity development of the local community is necessary to improve understanding of probabilistic forecasts and overcome communication challenges.
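The probability of flooding relative to the danger level mentioned above can be illustrated with an ensemble forecast: the probability is simply the share of ensemble members whose predicted water level reaches the threshold. The ensemble values and danger level below are invented.

```python
# Illustrative ensemble-based exceedance probability for a flood bulletin.
# Water levels and the danger level are invented numbers.

def exceedance_probability(ensemble_levels, danger_level):
    """Fraction of ensemble members at or above the flood danger level."""
    above = sum(1 for level in ensemble_levels if level >= danger_level)
    return above / len(ensemble_levels)

DANGER_LEVEL = 19.5  # metres, hypothetical station threshold
ensemble = [19.1, 19.7, 20.2, 19.4, 19.8, 19.6, 19.2, 20.0, 19.3, 19.9]

p = exceedance_probability(ensemble, DANGER_LEVEL)
print(f"{p:.0%} probability of exceeding the danger level")  # 60%
```

Stating the chance of crossing a single well-known threshold, rather than presenting many raw scenarios, is one way a bulletin can communicate a probabilistic forecast to users accustomed to deterministic warnings.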
How to cite: Hossain, S., Cloke, H., Ficchì, A., and Stephens, E.: Flood preparedness decisions and stakeholders' perspectives on flood early warning in Bangladesh, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1367, https://doi.org/10.5194/egusphere-egu2020-1367, 2020.
EGU2020-507 | Displays | ITS1.4/HS4.8
Towards improving a national flood early warning system with global ensemble flood predictions and local knowledge; a case study on the Lower Shire Valley in Malawi.Thirza Teule, Anaïs Couasnon, Kostas Bischiniotis, Julia Blasch, and Marc van den Homberg
Flood risk, a function of hazard, exposure, and vulnerability, is increasing globally and has led to ever more disastrous flood events. Previous research has shown that taking early action is much more cost-effective than responding once a flood occurs. Such an anticipatory approach requires flood early warning systems (EWS) that provide ample lead time and sufficient spatial resolution. However, in developing countries the skill of available forecasts is often insufficient to create an effective triggering mechanism as part of a flood EWS.
This research assesses two methods to improve an existing flood EWS, using a case study of the most flood-prone area of Malawi, the Lower Shire Valley. First, the forecast skill and trigger levels of the medium-range Global Flood Awareness System (GloFAS) model are determined for four gauge locations to assess how they can improve the national EWS. Second, it assesses how integrating flood forecasts based on local knowledge with official forecasts can help to improve the EWS, using semi-structured interviews at the national level and focus group discussions at the community level. The study shows that GloFAS does not predict absolute discharge values precisely, but can be used to predict floods if the correct trigger levels are set per location. Integrating multiple forecast sources is found to be useful at both national and community levels. An integration process is proposed in which village stakeholders take the leading role, using existing disaster management and civil protection coordination mechanisms. Overall, both methods can contribute to improving the flood EWS and decreasing flood risk in the Lower Shire Valley in Malawi.
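A location-specific trigger of the kind described can be sketched as a simple ensemble exceedance rule: the alert fires when enough ensemble members exceed the locally calibrated discharge threshold. The member values, threshold and 60% probability below are illustrative assumptions, not the study's calibrated settings:

```python
import numpy as np

def flood_trigger(ensemble_discharge, trigger_level, min_probability=0.6):
    """Return True when enough ensemble members exceed the local trigger level.

    ensemble_discharge : forecast discharge values, one per ensemble member (m^3/s)
    trigger_level      : locally calibrated discharge threshold (m^3/s)
    min_probability    : fraction of members that must exceed the threshold
    """
    exceedance_prob = np.mean(np.asarray(ensemble_discharge) > trigger_level)
    return bool(exceedance_prob >= min_probability)

# Hypothetical 51-member ensemble with 35 members above a 700 m^3/s trigger
members = np.array([820.0] * 35 + [450.0] * 16)
print(flood_trigger(members, trigger_level=700.0))  # 35/51 ≈ 0.69 >= 0.6 → True
```

Because the trigger compares exceedance probability rather than the ensemble-mean discharge, it remains usable even when absolute discharge values are biased, which matches the finding that GloFAS is useful once per-location trigger levels are set.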
How to cite: Teule, T., Couasnon, A., Bischiniotis, K., Blasch, J., and van den Homberg, M.: Towards improving a national flood early warning system with global ensemble flood predictions and local knowledge; a case study on the Lower Shire Valley in Malawi., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-507, https://doi.org/10.5194/egusphere-egu2020-507, 2020.
EGU2020-19559 | Displays | ITS1.4/HS4.8
On the development and operationalization of an impact-based forecasting system to support early action for river floods in ZambiaStefania Giodini, Aklilu Teklesadik, Jannis Visser, Orla Canavan, Innocent Bwalya, Irene Amuron, and Marc van den Homberg
How to cite: Giodini, S., Teklesadik, A., Visser, J., Canavan, O., Bwalya, I., Amuron, I., and van den Homberg, M.: On the development and operationalization of an impact-based forecasting system to support early action for river floods in Zambia, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19559, https://doi.org/10.5194/egusphere-egu2020-19559, 2020.
EGU2020-20150 | Displays | ITS1.4/HS4.8
Improving flood damage assessments in data-scarce areas by retrieving building characteristics through automated UAV image processingHans de Moel, Lucas Wouters, Marleen de Ruiter, Anais Couasnon, Marc van den Homberg, Aklilu Teklesadik, and Jacopo Margutti
Reliable information on the building stock and its vulnerability is important for understanding societal exposure to flooding and other natural hazards. Unfortunately, such information is often lacking in developing countries, so flood damage assessments resort to aggregated information collected at the national or district level. In many instances, this information does not represent the built environment or its characteristics. This study aims to improve current flood damage assessments by extracting structural characteristics at the individual building level and estimating flood damage from the related susceptibility. An object-based image analysis (OBIA) of high-resolution drone imagery is carried out, after which a machine learning algorithm classifies building types and outlines building shapes. The resulting building classes are linked to local stage-dependent damage curves. To estimate damage, the flood impact is based on the flood extent of the mid-January 2019 floods in Malawi, derived from satellite remote sensing. The corresponding water depth is extracted from this inundation map and taken as the damaging hydrological parameter in the model. The approach is applied to three villages in a flood-prone area of the Southern Shire basin in Malawi. By comparing the estimated damage from the individual-object approach with an aggregated land-use approach, we highlight the potential for very detailed, local damage assessments using drone imagery in poorly accessible and dynamic environments. The results show that the choice of approach to exposed elements makes a significant difference in damage estimation, and we make recommendations for future assessments at similar areas and scales.
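As an illustration of the final damage step, a stage-dependent damage curve can be applied per classified building via linear interpolation of the damage fraction at the local water depth. The building classes, curve values and building value below are hypothetical, not those used in the study:

```python
import numpy as np

# Hypothetical stage-dependent damage curves per building class:
# water depth (m) -> fraction of the building value damaged
DEPTHS = np.array([0.0, 0.5, 1.0, 2.0, 3.0])
DAMAGE_FRACTION = {
    "traditional": np.array([0.0, 0.30, 0.60, 0.90, 1.00]),
    "improved":    np.array([0.0, 0.10, 0.25, 0.55, 0.80]),
}

def building_damage(building_class, water_depth, building_value):
    """Interpolate the damage fraction for one building and scale by its value."""
    frac = np.interp(water_depth, DEPTHS, DAMAGE_FRACTION[building_class])
    return frac * building_value

# One classified building under 1.5 m of floodwater:
# fraction interpolates to 0.75, so damage = 0.75 * 2000
print(building_damage("traditional", 1.5, building_value=2000.0))
```

Summing this per-object estimate over all classified buildings in the flood extent gives the individual-object damage total that the study compares against the aggregated land-use approach.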
How to cite: de Moel, H., Wouters, L., de Ruiter, M., Couasnon, A., van den Homberg, M., Teklesadik, A., and Margutti, J.: Improving flood damage assessments in data-scarce areas by retrieving building characteristics through automated UAV image processing, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20150, https://doi.org/10.5194/egusphere-egu2020-20150, 2020.
EGU2020-16179 | Displays | ITS1.4/HS4.8
Proactive Drought and Extreme Event Preparedness: Seasonal Climate Forecasts offer Benefit for Decision Making in Water Management in Semi-arid RegionsTanja Portele, Christof Lorenz, Patrick Laux, and Harald Kunstmann
Semi-arid regions are the regions most affected by drought, and in these climatically sensitive regions the frequency and intensity of droughts and hot extremes are projected to increase. With increasing precipitation variability, sustainable water management is required. Seasonal climate forecasts could support proactive drought and extreme event preparedness as well as damage mitigation. However, their probabilistic nature, the lack of clear derivations of actions, and institutional conservatism impede their application in decision making in the water management sector. Using the latest global seasonal climate forecast product (SEAS5) of the European Centre for Medium-Range Weather Forecasts, at 35 km resolution and with a 7-month forecast horizon, we show that seasonal-forecast-based actions offer potential economic benefit and allow for climate proofing in semi-arid regions in the case of drought and extreme events. Our analysis includes 7 semi-arid, partly highly managed river basins in Africa, Asia and South America, with extents from tens of thousands to millions of square kilometers. The value of forecast-based action is derived from the skill measures of hit rate (worthy action) and false alarm rate (action in vain) and is related to economic expenses through the ratio of the costs and losses associated with an early action. For water management policies, forecast probability triggers for early action plans can be offered based on expense minimization and event maximization criteria. Our results show that even long lead times and long accumulation periods attain value for a range of users and cost-loss situations. For example, in the case of extreme wet conditions (monthly precipitation above the 90th percentile), seasonal-forecast-based action in 5 out of 7 regions can still achieve more than 50% of the expenses saved by a perfect forecast at 6 months in advance.
The utility of seasonal forecasts strongly depends on the user, the cost-loss situation, the region and the concrete application. In general, seasonal forecasts allow decision makers to save expenses, and to adapt to and mitigate damages of extreme events related to climate change.
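The cost-loss framing above can be made concrete with the standard relative economic value measure, which combines hit rate, false alarm rate, event base rate and the user's cost-loss ratio. This is a generic sketch of that calculation, not the authors' exact implementation, and the example numbers are made up:

```python
def relative_value(hit_rate, false_alarm_rate, base_rate, cost_loss_ratio):
    """Relative economic value of acting on a forecast in a cost-loss model.

    Expenses are expressed per unit loss L; taking protective action costs
    alpha = C/L. V = 1 for a perfect forecast, 0 for the climatological
    strategy, and V < 0 when the forecast is worse than always/never acting.
    """
    alpha, s = cost_loss_ratio, base_rate
    e_climate = min(alpha, s)            # best of always acting vs never acting
    e_perfect = s * alpha                # act exactly when the event occurs
    e_forecast = (hit_rate * s + false_alarm_rate * (1 - s)) * alpha \
                 + s * (1 - hit_rate)    # hits and false alarms pay C, misses pay L
    return (e_climate - e_forecast) / (e_climate - e_perfect)

# Hypothetical user whose protection cost is 20% of the avoidable loss
print(round(relative_value(hit_rate=0.8, false_alarm_rate=0.1,
                           base_rate=0.3, cost_loss_ratio=0.2), 3))  # 0.557
```

The "more than 50% of the expenses saved by a perfect forecast" result corresponds to V > 0.5 in this measure for the cost-loss situations considered.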
How to cite: Portele, T., Lorenz, C., Laux, P., and Kunstmann, H.: Proactive Drought and Extreme Event Preparedness: Seasonal Climate Forecasts offer Benefit for Decision Making in Water Management in Semi-arid Regions, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-16179, https://doi.org/10.5194/egusphere-egu2020-16179, 2020.
EGU2020-6647 | Displays | ITS1.4/HS4.8
Forecasting vegetation condition to mitigate the impacts of drought in Kenya.Pedram Rowhani, Adam Barrett, Seb Oliver, James Muthoka, Edward Salakpi, and Steven Duivenvoorden
Droughts are a major threat globally, as they can cause substantial damage to society, especially in regions that depend on rain-fed agriculture. Acting early on alerts provided by early warning systems (EWS) can provide substantial mitigation, reducing the financial and human cost. However, existing EWS tend only to monitor current, rather than forecast future, environmental and socioeconomic indicators of drought, and hence are not always sufficiently timely to be effective in practice. In Kenya, the National Drought Management Authority (NDMA) provides monthly bulletins assessing food security in the 23 arid and semi-arid regions using current biophysical (e.g. rainfall, vegetation condition) and socio-economic (production, access, and utilisation) factors. One key biophysical indicator in the NDMA drought phase classification is based on the Vegetation Condition Index (VCI).
In this study we explore machine-learning techniques to forecast (up to six weeks ahead) the 3-month VCI (VCI3M), commonly used in the pastoral areas of Kenya to monitor droughts. We specifically focus on Gaussian process models and linear autoregressive models to forecast this indicator, using VCI data derived from both Landsat (every 16 days at 30 m resolution) and the MODerate resolution Imaging Spectroradiometer (MODIS; daily data at 500 m resolution).
Our methods provide highly skillful forecasts several weeks ahead. As a benchmark we predicted the drought alert marker used by the NDMA (VCI3M < 35). Both of our models were able to predict this alert marker four weeks ahead with a hit rate of around 89% and a false alarm rate of around 4%, or 81% and 6% respectively six weeks ahead.
The forecasts developed here could, for example, help establish a new drought phase classification ('Early Alert') which, along with adequate preparedness actions developed by disaster risk managers, would minimise the risk of worsening drought conditions.
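The alert-marker verification behind the quoted hit and false alarm rates can be sketched as a simple contingency-table calculation on forecast and observed VCI3M series; the data below are made up for illustration:

```python
import numpy as np

def alert_skill(forecast_vci3m, observed_vci3m, threshold=35.0):
    """Hit rate and false alarm rate for a drought alert such as VCI3M < 35."""
    pred = np.asarray(forecast_vci3m) < threshold   # forecast raises the alert
    obs = np.asarray(observed_vci3m) < threshold    # alert condition actually occurred
    hits = np.sum(pred & obs)
    misses = np.sum(~pred & obs)
    false_alarms = np.sum(pred & ~obs)
    correct_negatives = np.sum(~pred & ~obs)
    hit_rate = hits / (hits + misses)
    false_alarm_rate = false_alarms / (false_alarms + correct_negatives)
    return hit_rate, false_alarm_rate

# Made-up forecast/observation pairs of 3-month VCI values
hr, far = alert_skill([30, 40, 20, 50, 33], [32, 38, 25, 45, 36])
print(round(hr, 2), round(far, 2))  # 1.0 0.33
```

Here the false alarm rate is the fraction of non-events for which the alert fired, the convention consistent with the very low (4-6%) rates reported above.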
How to cite: Rowhani, P., Barrett, A., Oliver, S., Muthoka, J., Salakpi, E., and Duivenvoorden, S.: Forecasting vegetation condition to mitigate the impacts of drought in Kenya., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6647, https://doi.org/10.5194/egusphere-egu2020-6647, 2020.
EGU2020-16114 | Displays | ITS1.4/HS4.8 | Highlight
Linking Drought Forecast Information to Smallholder Farmer’s Strategies and Local Knowledge in Southern MalawiIleen Streefkerk, Hessel Winsemius, Marc van den Homberg, Micha Werner, Tina Comes, Maurits Ertsen, Neha Mittal, and Gumbi Gumbi
Most people in Malawi depend on rainfed agriculture for their livelihoods. This leaves them vulnerable to drought and to rainfall patterns changing with the climate. Farmers have adopted local strategies and knowledge, evolved over time, that help reduce their overall vulnerability to climate variability shocks. Another option to increase the resilience of rainfed farmers to drought is to provide forecast information on the upcoming rainfall season. Forecast information has the potential to inform farmers' decisions on agricultural strategies. However, significant challenges remain in its provision: often the forecast information is not tailored to farmers, resulting in limited uptake in their agricultural decision-making.
This study therefore explores how drought forecast information can be linked to existing farmer strategies and local knowledge on predicting future rainfall patterns. Using participatory approaches, we establish the requirements that drought forecast information should meet to effectively inform farmers' decision-making. Based on these requirements, a sequential threshold model using meteorological indicators drawn from farmers' local knowledge was developed to predict drought indicators (e.g. late onset of rains and dry spells). Additionally, through interviews with stakeholders and a visualisation of the current information flow, further insight into the current drought information system was developed.
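A threshold model of this kind can be illustrated with simple rules for two of the named drought indicators, onset of rains and dry spells. The thresholds below (25 mm over 10 days for onset, a 1 mm wet-day limit) are illustrative assumptions, not the calibrated values from the study:

```python
import numpy as np

def season_onset(daily_rain_mm, wet_total=25.0, window=10):
    """First day index whose following `window` days accumulate >= wet_total mm.

    Returns None when no such spell exists, i.e. the late-onset indicator fires.
    """
    rain = np.asarray(daily_rain_mm, dtype=float)
    for day in range(len(rain) - window + 1):
        if rain[day:day + window].sum() >= wet_total:
            return day
    return None

def longest_dry_spell(daily_rain_mm, wet_day_mm=1.0):
    """Length of the longest run of days with rainfall below `wet_day_mm`."""
    longest = run = 0
    for r in daily_rain_mm:
        run = run + 1 if r < wet_day_mm else 0
        longest = max(longest, run)
    return longest

# Made-up daily rainfall (mm) for the start of a season
season = [0, 0, 0, 0, 0, 1, 0, 2, 0, 0, 3, 5, 12, 9, 2, 0, 0, 0, 0, 0]
print(season_onset(season))       # 4: days 4-13 are the first 10-day spell >= 25 mm
print(longest_dry_spell(season))  # 5: longest run of days below 1 mm
```

Applying such rules sequentially, onset first, then within-season dry spells, yields the locally relevant indicators that the study compares against ENSO-based predictors.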
The results show that local knowledge has predictive value for forecasting drought indicators. The skill of the forecast differs per location, with higher skill for southern locations. In addition, the results suggest that local knowledge indicators have greater predictive value for the locally relevant drought indicators than the currently used ENSO-related indicators. This research argues that including local knowledge could improve current forecast information by tailoring it to farmers' forecast requirements and context. The findings should therefore be insightful for actors and research fields involved in drought forecasting in relation to user-specific needs.
How to cite: Streefkerk, I., Winsemius, H., van den Homberg, M., Werner, M., Comes, T., Ertsen, M., Mittal, N., and Gumbi, G.: Linking Drought Forecast Information to Smallholder Farmer’s Strategies and Local Knowledge in Southern Malawi, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-16114, https://doi.org/10.5194/egusphere-egu2020-16114, 2020.
EGU2020-19652 | Displays | ITS1.4/HS4.8
A review of the effectiveness of drought warning communication and dissemination in MalawiAlexia Calvel, Micha Werner, Marc van den Homberg, Hans van der Kwast, Andrés Cabrera Flamini, and Neha Mittal
Droughts are among the major and complex natural hazards that may lead to food insecurity, owing to their long-term and cumulative impacts, compounded by the difficulty of predicting them. Efforts to improve early warning systems aim to reduce the impacts of drought events, and although significant advances have been made in drought forecasting, providing actionable warnings that lead to effective responses remains challenging for a range of reasons. In this study we aim to improve our understanding of how people-centred warning communication and dissemination is carried out for drought warning in Malawi. Our methodology is based on five focus group discussions with farmers and 25 semi-structured interviews with various government officials as well as representatives from UNDP, WFP and NGOs. The analysis of these interviews and discussions follows a qualitative approach, using grounded theory and content analysis, to better understand the organisational structure, the communication processes, and the ability of warnings to trigger actions by farmers and NGOs.
Our results show that both farming communities and NGOs working at the local level perceive drought differently than expected. Droughts are considered to be events that cause crop failure, which relates primarily to prolonged dry spells following the planting season, to fall armyworms, and even to the occurrence of floods. Consistently, drought warnings disseminated at the local level were found to focus on these aspects. Moreover, although these warnings do trigger actions, they do so only to a certain extent. Daily weather forecasts are not used by farmers because of the high uncertainty associated with these predictions. NGOs use drought early warnings in combination with famine early warnings to trigger early actions.
Communication channels and processes were found to be well adapted to local conditions and to disseminate consistent drought warnings and guidance to end users, which has improved trust in the drought early warnings received. However, high levels of illiteracy and a lack of understanding of the link between impacts and weather information render the seasonal forecasts and text messages unusable for farmers, making agricultural extension officers and community radios the preferred channels of communication. Organisational structures and processes appear relatively clear. Nevertheless, feedback mechanisms were found to be only scantily implemented, owing to the lack of documentation of local perceptions and indigenous knowledge. Overall, our results show that progress has been made in meeting the requirements for a people-centred early warning. However, external challenges, such as a lack of local funds leading to a high dependency on donors, and the frequent changes of government officials, hinder the full development of such an approach.
How to cite: Calvel, A., Werner, M., van den Homberg, M., van der Kwast, H., Cabrera Flamini, A., and Mittal, N.: A review of the effectiveness of drought warning communication and dissemination in Malawi, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19652, https://doi.org/10.5194/egusphere-egu2020-19652, 2020.
EGU2020-19891 | Displays | ITS1.4/HS4.8
Forecast based action: developing triggers for preventing food insecurity in Eastern Africa
Marc van den Homberg, Gabriela Guimarães Nobre, and Edward Bolton
The project “Forecast based Financing for Food Security” (F4S) was initiated in July 2019 with the aim of providing a deeper understanding of how forecast information could be routinely used as a basis for financing early action to prevent food insecurity in pilot areas in Ethiopia, Kenya, and Uganda. The F4S project is linked to the existing Innovative Approaches in Response Preparedness project and responds to the growing interest and attention placed in recent years by academic institutions and development and humanitarian agencies on creating evidence that can leverage risk prevention and disaster risk reduction.
To ensure adequate forecast-based actions, one needs the right information and evidence to guide fast decision-making. Key enabling aspects are an understanding of the impact of food insecurity, the resources needed to address it, and insight into the associated costs, beneficiaries’ preferences and lead times. In response, the F4S project is currently:
- Developing an impact-based probabilistic food insecurity forecasting model using Machine Learning algorithms and datasets of food insecurity drivers;
- Collecting local evidence on food insecurity triggers and information on individual preferences on key design elements of cash transfer mechanisms through surveys and choice experiments;
- Evaluating the cost-effectiveness of different cash transfer mechanisms.
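As a rough illustration of the first activity, an impact-based probabilistic forecast maps drivers of food insecurity to a probability that can trigger financing. The sketch below assumes a simple logistic model with invented features, weights and threshold; the actual F4S model, its drivers and its training data are not described in the abstract.

```python
import math

# Hypothetical drought-related drivers for one district (illustrative values only).
features = {
    "rainfall_anomaly": -0.8,   # standardised seasonal rainfall deficit
    "maize_price_change": 0.3,  # relative price increase since last season
    "ndvi_anomaly": -0.5,       # vegetation-index anomaly
}

# Assumed weights; in practice these would be fitted to historical
# food-insecurity outcomes (e.g. IPC phase classifications).
weights = {"rainfall_anomaly": -1.2, "maize_price_change": 2.0, "ndvi_anomaly": -1.5}
bias = -1.0

def food_insecurity_probability(x, w, b):
    """Logistic model mapping drivers to a probability of food insecurity."""
    z = b + sum(w[k] * x[k] for k in w)
    return 1.0 / (1.0 + math.exp(-z))

p = food_insecurity_probability(features, weights, bias)

# Trigger early action (e.g. cash transfers) when the probability
# exceeds a pre-agreed threshold.
TRIGGER = 0.5
print(f"P(food insecurity) = {p:.2f}; trigger early action: {p > TRIGGER}")
```

In practice the weights would be learned from data on food insecurity drivers, and the trigger threshold would be agreed with the financing mechanism in advance.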
This PICO presentation seeks to share lessons learnt and preliminary results on the development of triggers for enabling early action against the first signs of food insecurity in Eastern Africa. It presents key results obtained through surveys and choice experiments regarding local knowledge of food insecurity and aid design. Furthermore, it discusses the potential cost-effectiveness and advantages of acting based on forecasts.
How to cite: van den Homberg, M., Guimarães Nobre, G., and Bolton, E.: Forecast based action: developing triggers for preventing food insecurity in Eastern Africa, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19891, https://doi.org/10.5194/egusphere-egu2020-19891, 2020.
EGU2020-15190 | Displays | ITS1.4/HS4.8
Knowledge, Attitudes and Practices on extreme heat: Insights from outdoor workers in Hanoi, Vietnam
Steffen Lohrey, Melissa Chua, Jerôme Faucet, and Jason Lee
Purpose: Extreme heat threatens poor urban populations, particularly those who are economically forced to work outdoors or in hot environments. The Vietnamese Red Cross, with technical support from the German Red Cross, is therefore implementing a Forecast-based Financing project to assist vulnerable population groups in urban areas before and during heatwaves. To inform this humanitarian project in choosing appropriate early actions, this research investigates empirical evidence on heat vulnerability using data from a “Knowledge Attitudes Practices” (KAP) survey conducted in 2018 among outdoor workers in Hanoi, Vietnam.
Methods: We analyze the outcomes of the KAP survey, which comprised 1027 respondents classified into four occupation groups. Key questions covered respondents’ self-reported economic and health situation, impacts from past heatwaves, and knowledge of measures that reduce health impacts from extreme heat. We first use descriptive statistics to assess the basic properties of the surveyed population groups. We then use a principal component analysis to identify the properties that best capture the variability of responses and to identify sub-groups.
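As an illustration of the dimensionality-reduction step, the sketch below runs a principal component analysis on a tiny invented two-variable data set; the real survey has many more indicators, and a library such as scikit-learn would normally be used.

```python
import math

# Toy KAP-style responses: rows are respondents, columns are two numeric
# indicators (e.g. a heat-knowledge score and an AC-access flag). Invented data.
data = [(2, 0), (3, 0), (8, 1), (9, 1), (5, 0), (7, 1)]

n = len(data)
mean = [sum(col) / n for col in zip(*data)]
centred = [(x - mean[0], y - mean[1]) for x, y in data]

# Entries of the 2x2 sample covariance matrix [[cxx, cxy], [cxy, cyy]]
cxx = sum(x * x for x, _ in centred) / (n - 1)
cyy = sum(y * y for _, y in centred) / (n - 1)
cxy = sum(x * y for x, y in centred) / (n - 1)

# Closed-form eigenvalues of a symmetric 2x2 matrix: these are the
# variances captured by the two principal components.
tr, det = cxx + cyy, cxx * cyy - cxy * cxy
disc = math.sqrt(tr * tr / 4 - det)
lam1, lam2 = tr / 2 + disc, tr / 2 - disc

explained = lam1 / (lam1 + lam2)
print(f"first principal component explains {explained:.0%} of the variance")
```

A dominant first component, as here, indicates that one combined property (e.g. knowledge plus AC access) captures most of the spread between respondents, which is what makes PCA useful for identifying sub-groups.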
Results: The occupation groups surveyed (builders, vendors, bikers) showed distinctly different properties, not only in mean age (28, 45 and 43 years, respectively), but also in their knowledge of heat-health symptoms and their access to night-time air-conditioning (builders: only 14%, compared to 42% for bikers). Air-conditioning access did not correlate with reported income. Builders knew considerably less about heat risk than the other groups, but also reported fewer perceived symptoms. The three most commonly reported health symptoms were tiredness, sweating and thirst, with 22% of respondents having sought medical advice because of heat-related symptoms. Income reduction during heat events was reported by 48% of respondents. The vast majority of respondents reported drinking more (89%) or remaining in shaded areas (87%). Most respondents (76%) could access and understand weather forecasts and early warnings.
Conclusion: Our data and analysis highlight how different occupation groups of outdoor workers in Hanoi vary in their socio-economic properties and their vulnerability to extreme heat. These insights into different groups can be used to direct the implementation of early actions for anticipatory humanitarian assistance before and during heatwaves.
How to cite: Lohrey, S., Chua, M., Faucet, J., and Lee, J.: Knowledge, Attitudes and Practices on extreme heat: Insights from outdoor workers in Hanoi, Vietnam, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-15190, https://doi.org/10.5194/egusphere-egu2020-15190, 2020.
EGU2020-12337 | Displays | ITS1.4/HS4.8 | Highlight
Potential Tropical Cyclone Disaster Loss Assessment based on Multiple Hazard Indicators
Jian Li
Tropical cyclones can cause heavy casualties and economic losses in the coastal areas of China. It is therefore important to develop a method that can provide a rapid pre-event loss assessment prior to the landfall of a tropical cyclone. In this study, a pre-event tropical cyclone disaster loss assessment method is proposed, based on the retrieval of similar tropical cyclones using multiple hazard indicators. These indicators include tropical cyclone location, maximum wind speed, central pressure, radius of maximum wind, forward speed, Integrated Kinetic Energy (IKE), maximum storm surge, and maximum significant wave height. First, track similarity is measured by a similarity deviation that considers only the locations of tropical cyclone tracks. Second, intensity similarity is measured by a best similarity coefficient using central pressure, radius of maximum wind, maximum wind speed, forward speed, and wind, storm surge and wave intensity indices. The potential loss of the current tropical cyclone is then assessed based on the losses of the retrieved similar tropical cyclones. Taking tropical cyclone Utor (2013), which affected China, as an example, the potential loss is predicted from the five most similar historical tropical cyclones retrieved from the full historical record. The method is flexible for rapid disaster loss assessment, since it provides relatively satisfactory results under two scenarios of input dataset availability.
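The retrieve-similar-and-aggregate idea can be sketched as follows. All storms, indicator values, normalisation constants and losses below are invented, and the study's actual similarity deviation and best similarity coefficient are more elaborate than these simple distances.

```python
import math

# Hypothetical archive: storm -> (track points (lat, lon), intensity
# indicators (central pressure hPa, max wind m/s), recorded loss in bn CNY).
archive = {
    "A": ([(15.0, 120.0), (17.0, 116.0)], (945.0, 42.0), 1.2),
    "B": ([(14.5, 121.0), (16.5, 117.0)], (950.0, 40.0), 0.9),
    "C": ([(22.0, 130.0), (24.0, 126.0)], (980.0, 25.0), 0.1),
}
current_track = [(15.2, 120.5), (17.1, 116.4)]
current_intensity = (948.0, 41.0)

def track_deviation(t1, t2):
    """Mean pointwise distance (in degrees) between two tracks."""
    return sum(math.dist(p, q) for p, q in zip(t1, t2)) / len(t1)

def intensity_distance(i1, i2):
    """Euclidean distance between roughly normalised intensity indicators."""
    scales = (1000.0, 50.0)  # assumed normalisation constants
    return math.dist([a / s for a, s in zip(i1, scales)],
                     [b / s for b, s in zip(i2, scales)])

# Rank historical storms by combined dissimilarity (smaller = more similar)
# and estimate the loss from the k most similar ones.
k = 2
ranked = sorted(archive, key=lambda s: track_deviation(current_track, archive[s][0])
                + intensity_distance(current_intensity, archive[s][1]))
estimate = sum(archive[s][2] for s in ranked[:k]) / k
print(f"most similar storms: {ranked[:k]}, loss estimate = {estimate:.2f} bn CNY")
```

The two-stage design means the track filter can run as soon as a forecast track exists, with the intensity indicators refining the ranking as more observations arrive.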
How to cite: Li, J.: Potential Tropical Cyclone Disaster Loss Assessment based on Multiple Hazard Indicators, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12337, https://doi.org/10.5194/egusphere-egu2020-12337, 2020.
EGU2020-11104 | Displays | ITS1.4/HS4.8 | Highlight
How Earthquake Early Warning Systems can affect scientist’s liability? International perspective for domestic questions.
Cecilia Valbonesi
Early Warning Systems (EWS) represent a technical-scientific challenge aimed at improving the chances of survival of populations exposed to the effects of dangerous natural events. This improvement must face great difficulties in application, because EWS may create serious liabilities for the scientists and engineers involved.
In this complex scenario it is necessary to consider the differences among EWS (e.g. meteorological, tsunami, earthquake) and their capability to predict and avoid the consequences of damaging events.
The development of EWS in Italy is not homogeneous.
Some of these systems, such as Earthquake EWS (EEWS), are in a testing phase, and much can be learned from comparison with other countries that have been operating such systems for years.
This recognition is very important, because the tragic and deadly events of the L'Aquila earthquake, the landslide in Sarno, and the recent eruption of Stromboli volcano have taught us that the relationship between science and law in Italy is really difficult.
Thus, before entering the operative phase of EEWS, it is necessary to start from a survey of the international and national legislative and jurisprudential frameworks that support the assessment of criminal and civil liability in the event of a “wrong” technical-scientific response that fails to reduce the consequences for people and infrastructure.
The future application of EEWS in Italy must be supported by research into solutions that allow scientists and engineers to operate with greater awareness and less fear of the consequences of this indispensable progress.
In this framework, the different roles of those involved in the development and dissemination of EEWS are also relevant: the responsibilities of scientists developing the tools are not the same as those of technical operators who are called upon to disseminate the alert.
In all these cases, however, the offer of an EEW service represents a promise to the population that the harmful consequences of certain natural and disastrous events will be confronted.
This promise certainly creates a legitimate expectation which, where betrayed, can give rise to criminal and civil liability for adverse events (manslaughter, negligence, unintentional disaster, etc.).
The population, however, should not only expect to receive correct alarms, but must also be put in a position to understand the uncertainties involved in rapid estimates, to be prepared to face the risk, and to react in the right way.
How to cite: Valbonesi, C.: How Earthquake Early Warning Systems can affect scientist’s liability? International perspective for domestic questions., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11104, https://doi.org/10.5194/egusphere-egu2020-11104, 2020.
ITS1.5/NH9.21 – Resilience to natural hazards: assessments, frameworks and tools
EGU2020-12023 | Displays | ITS1.5/NH9.21 | Highlight
The Next Generation Drought Index Project
Daniel Osgood and Markus Enenkel and the Daniel E Osgood
There is ample evidence of the added value of anticipatory financing mechanisms in mitigating the impact of extreme droughts on the livelihoods of vulnerable communities. Various projects have tried to optimize parametric insurance via different methods, yielding useful lessons for both macro- and micro-level insurance. In parallel, novel satellite-derived sources of information, such as soil moisture or evaporative stress, have become available to monitor key variables of the hydrological cycle and to strengthen the drought narrative via cross-validation. The Next Generation Drought Index project was funded by the World Bank to develop a generic framework and related technical toolbox that allow decision-makers to understand every step of index design, calibration and validation. An interactive dashboard is linked directly to different data sources, the outputs of financial risk models and socioeconomic information to connect climate hazard and impact information. Collaboration partners range from African Risk Capacity to the United Nations World Food Programme, the START Network, the World Bank’s Global Index Insurance Facility and the European Space Agency. The overall goal is to reduce basis risk without creating an analytical black box, and to identify and use ‘low-hanging fruit’, such as the detection of early-season moisture deficits via remote sensing. The findings from Senegal suggest that the effectiveness of insurance might be improved through client-centered design via participatory/crowdsourced processes, a suite of advanced satellite data and models, available government/institutional data, and structured decision-tree processes based on key performance indicators.
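As a minimal illustration of how a parametric drought index converts a monitored variable into a payout, the sketch below uses a cumulative-rainfall trigger with a linear payout between trigger and exit levels; the window, thresholds and rainfall values are invented, not taken from the project.

```python
# Invented early-season rainfall observations (mm per dekad) for one site.
dekadal_rain_mm = [12, 8, 3, 0, 1, 5]

TRIGGER_MM = 40.0  # payouts start below this cumulative rainfall
EXIT_MM = 10.0     # full payout at or below this level

def payout_fraction(total_rain):
    """Linear payout between trigger and exit, a common parametric design."""
    if total_rain >= TRIGGER_MM:
        return 0.0
    if total_rain <= EXIT_MM:
        return 1.0
    return (TRIGGER_MM - total_rain) / (TRIGGER_MM - EXIT_MM)

total = sum(dekadal_rain_mm)
print(f"season total {total} mm -> payout fraction {payout_fraction(total):.2f}")
```

Basis risk arises exactly here: whenever the index (cumulative rainfall) and the actual loss on the ground diverge, the payout is wrong, which is why transparent calibration and validation of each design step matter.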
How to cite: Osgood, D. and Enenkel, M. and the Daniel E Osgood: The Next Generation Drought Index Project, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12023, https://doi.org/10.5194/egusphere-egu2020-12023, 2020.
EGU2020-20976 | Displays | ITS1.5/NH9.21
Our Coastal Futures: pathways to sustainable development
Robert Weiss, Valerie Cummins, Heath Kelsey, Sebastian Ferse, Anja Scheffers, Donald Forbes, and Bruce Glavovic
EGU2020-3934 | Displays | ITS1.5/NH9.21
Flood resilience measurement for communities: data for science and practice
Michael Szoenyi, Finn Laurien, and Adriana Keating
Given the increased attention paid to strengthening disaster resilience, there is a growing need to invest in its measurement and in the overall accountability of resilience-strengthening initiatives. There is a major gap in evidence about what actually makes communities more resilient when an event occurs, because there are no empirically validated measures of disaster resilience. Similarly, efforts to identify operational indicators have gained traction only recently. The Flood Resilience Measurement for Communities (FRMC) framework and its associated, fully operational, integrated tool take a systems-thinking, holistic approach to serve the dual goals of generating data on the determinants of community flood resilience and providing decision support for on-the-ground investment. The FRMC framework measures “sources of resilience” before a flood happens and looks at the post-flood impacts afterwards. It is built around five types of capital (the 5Cs: human, social, physical, natural, and financial) and the four properties of a resilient system (the 4Rs: robustness, redundancy, resourcefulness, and rapidity). The sources of resilience are graded according to Zurich’s Risk Engineering Technical Grading Standard. Results are displayed by the 5Cs and 4Rs, the disaster risk management (DRM) cycle, themes and context level, giving the approach further flexibility and accessibility.
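The grading logic can be illustrated with a toy data structure in which each source of resilience is tagged with a capital and a 4R property and graded; the sources, tags, grades and scoring below are invented for the sketch (the real FRMC grades several dozen sources).

```python
from statistics import mean

# Each "source of resilience" is tagged with a capital (5Cs) and a property
# (4Rs) and given a grade, here mapped to a numeric score.
GRADE_SCORES = {"A": 4, "B": 3, "C": 2, "D": 1}

sources = [
    # (name, capital, 4R property, grade)
    ("flood-aware household planning", "human", "resourcefulness", "B"),
    ("community savings groups", "financial", "redundancy", "C"),
    ("maintained drainage channels", "physical", "robustness", "B"),
    ("early-warning social networks", "social", "rapidity", "A"),
    ("upstream wetland buffers", "natural", "robustness", "C"),
]

def aggregate(by):
    """Mean graded score per capital ('capital') or per 4R ('property')."""
    idx = {"capital": 1, "property": 2}[by]
    keys = {s[idx] for s in sources}
    return {k: mean(GRADE_SCORES[s[3]] for s in sources if s[idx] == k)
            for k in keys}

print("by capital:", aggregate("capital"))
print("by 4R property:", aggregate("property"))
```

Tagging each source along both dimensions is what lets the same grading exercise be displayed by capital, by 4R, or by any other lens (such as the DRM cycle) without re-grading.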
The Zurich Flood Resilience Alliance (ZFRA) has identified the measurement of resilience as a valuable ingredient in building community flood resilience. In the first application phase (2013-2018), we measured flood resilience in 118 communities across nine countries, building on responses at household and community levels. Continuing this endeavor in the second phase (2018-2023) will allow us to enrich our understanding of community flood resilience and to extend this unique data set.
We find that at the community level, the FRMC enables users to track community progress on resilience over time in a standardized way. It thus provides vital information for the decision-making process in terms of prioritizing the resilience-building measures most needed by the community. At community and higher decision-making levels, measuring resilience also provides a basis for improving the design of innovative investment programs to strengthen disaster resilience.
By exploring data across multiple communities (facing different flood types and with very different socioeconomic and political contexts), we can generate evidence with respect to which characteristics contribute most to community disaster resilience before an event strikes. This contributes to meeting the challenge of demonstrating that the work we do has the desired impact – that it actually builds resilience. Our findings suggest that stronger interactions between community functions induce co-benefits for community development.
How to cite: Szoenyi, M., Laurien, F., and Keating, A.: Flood resilience measurement for communities: data for science and practice, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3934, https://doi.org/10.5194/egusphere-egu2020-3934, 2020.
EGU2020-1807 | Displays | ITS1.5/NH9.21
Messy Maps: Qualitative GIS for Urban Flood Resilience
Faith Taylor, James Millington, Ezekiel Jacob, Bruce Malamud, and Mark Pelling
We present a methodology to incorporate qualitative aspects of flood resilience, such as emotion, social connections and experience, into urban planning using qualitative GIS. The geographic information system (GIS) has become ubiquitous in urban planning and disaster risk reduction, but often results in resilience being conceptualised and deployed in highly technocratic and quantitative ways. Yet in the urban Global South, where the rate of informal growth often outstrips our ability to collect spatial data, the knowledge infrastructures used for resilience planning leave little room for participation and consideration of local experience. This presentation outlines two interlinked projects (‘Why we Disagree about Resilience’ and the follow-on ‘Expressive Mapping of Resilient Futures’) experimenting with qualitative GIS methodologies to map resilience as defined by informal settlement residents. We show examples from two case study cities, Nairobi (Kenya) and Cape Town (South Africa), with applicability across the urban Global South. Four map layers were generated: (i) flood footprints showing the detailed spatial knowledge of floods held by locals; (ii) georeferenced, narrated 360° StorySpheres capturing differing perspectives about a space; (iii) spatial social network maps showing residents’ connections to formal and informal actors before and during floods; (iv) multimedia pop-ups communicating contextual details missing from traditional GIS maps. We show that for informal settlements, many locations and aspects of resilience have vague or imprecise spatial locations, and that placing markers on a map makes them visible in ways that planners can begin to engage with. We discuss challenges such as privacy, legacy and participation. Although challenges remain, we found openness among city-level actors to using qualitative forms of evidence, and that the contextual detail aided their retention and understanding of resilience.
The ‘messy’ maps we present here illustrate that in the era of big data and metrics, there is a space for qualitative understanding of resilience, and that existing knowledge and spatial data infrastructures have potential to be more inclusive and holistic.
How to cite: Taylor, F., Millington, J., Jacob, E., Malamud, B., and Pelling, M.: Messy Maps: Qualitative GIS for Urban Flood Resilience, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1807, https://doi.org/10.5194/egusphere-egu2020-1807, 2020.
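One way to picture the four qualitative layers is as ordinary GeoJSON features, which slot into existing spatial data infrastructures. The sketch below encodes a single georeferenced StorySphere; the coordinates, URL and property names are illustrative assumptions, not the projects' actual schema.

```python
# Sketch: a narrated 360-degree StorySphere stored as a GeoJSON feature.
# All values are placeholders for illustration.
import json

storysphere = {
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [36.8219, -1.2921]},  # (lon, lat), Nairobi
    "properties": {
        "layer": "storysphere",
        "narration_url": "https://example.org/audio/sphere-01.mp3",  # placeholder
        "perspective": "resident",
        "notes": "Flood reaches knee height here most April rains.",
    },
}
feature_collection = {"type": "FeatureCollection", "features": [storysphere]}
geojson_text = json.dumps(feature_collection)
```

Because the result is standard GeoJSON, such qualitative layers can be overlaid on the same maps planners already use.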
EGU2020-21602 | Displays | ITS1.5/NH9.21
The Flood Resilience Dashboard
Ian McCallum, Stefan Velev, Finn Laurien, Reinhard Mechler, Adriana Keating, Stefan Hochrainer-Stigler, and Michael Szoenyi
The purpose of the “Flood Resilience Dashboard” is to put geo-spatial flood resilience data into the hands of practitioners. The idea is to provide an intuitive platform that combines as much open, peer-reviewed flood-resilience-related spatial data as possible with related spatial data available from the Flood Resilience Alliance, which in turn can be used to inform decisions. These data will include, among others, the Zurich Flood Resilience Measurement for Communities (FRMC) data, Vulnerability Capacity Assessment (VCA) maps, remote-sensing-derived information on flooding and other biophysical datasets (e.g. forest cover, water extent), modelled risk information, satellite imagery (e.g. night-time lights), and crowdsourced data.
The Dashboard will, as much as possible, lower the entry barrier for non-technical users by providing a simple login experience. Users should be able to explore the Dashboard using standard web-map navigation tools. The various charts and tables on the Dashboard refresh dynamically as features on the map are selected or the map extent is changed. No previous experience with or understanding of geo-spatial data is required beyond basic web-map navigation.
How to cite: McCallum, I., Velev, S., Laurien, F., Mechler, R., Keating, A., Hochrainer-Stigler, S., and Szoenyi, M.: The Flood Resilience Dashboard, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21602, https://doi.org/10.5194/egusphere-egu2020-21602, 2020.
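The refresh behaviour described above, where charts recompute as the map extent changes, can be sketched as a simple bounding-box filter over point records. This is an editorial sketch of the general pattern, not the Dashboard's actual data model; the datasets and coordinates are invented.

```python
# Sketch: recompute chart summaries for records inside the visible map extent.
def in_extent(point, bbox):
    """point = (lon, lat); bbox = (min_lon, min_lat, max_lon, max_lat)."""
    lon, lat = point
    min_lon, min_lat, max_lon, max_lat = bbox
    return min_lon <= lon <= max_lon and min_lat <= lat <= max_lat

def refresh_charts(records, bbox):
    """Return per-dataset counts for records visible in the current extent."""
    visible = [r for r in records if in_extent(r["location"], bbox)]
    counts = {}
    for r in visible:
        counts[r["dataset"]] = counts.get(r["dataset"], 0) + 1
    return counts

records = [  # hypothetical FRMC and VCA point records
    {"dataset": "FRMC", "location": (85.3, 27.7)},
    {"dataset": "VCA", "location": (85.4, 27.6)},
    {"dataset": "FRMC", "location": (10.0, 50.0)},  # outside the extent below
]
counts = refresh_charts(records, bbox=(84.0, 26.0, 86.0, 28.0))
```

In a real web map the same filter would be triggered by a pan/zoom event callback rather than called directly.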
EGU2020-6355 | Displays | ITS1.5/NH9.21
The Framework for Generating Climate Risk Infographic and Applying to Risk Communication
Chin Chieh Liu and Ching Pin Tung
Adaptation is an indispensable part of responding to climate change impacts, and risk assessment bridges data preparation and strategy planning. This study develops a framework that runs from risk assessment to information presentation, and applies it to risk communication. The framework follows the Climate Risk Template, which defines risk as the integration of hazard, exposure and sensitivity; the Climate Risk Template is in turn an auxiliary tool based on the Climate Change Adaptation Six Steps (CCA6Steps), a systematic procedure for analyzing risk and planning adaptation pathways. This study takes landslide disasters as the key issue and selects community residents and roads as the protected targets. First, simulated results of landslide potential evaluation are collated with literature, case studies and questionnaires that probe exposure and sensitivity. Next, a list of climate risk factors is established and weights are assigned to the factors using the entropy method. Finally, a risk matrix is used to evaluate risk values, and the assessment results are presented as an infographic. To support risk communication, the proposed framework helps the general public understand the causes of regional disaster risk and assists executive agencies in implementing climate risk assessment and adaptation pathway planning. Ultimately, the study will build a prototype of this framework, so that users need only write down the key issue and protected target and choose the component factors of risk to complete a climate risk assessment and generate a climate risk infographic themselves.
Keywords: climate risk template, climate risk assessment, risk communication, infographic
How to cite: Liu, C. C. and Tung, C. P.: The Framework for Generating Climate Risk Infographic and Applying to Risk Communication, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6355, https://doi.org/10.5194/egusphere-egu2020-6355, 2020.
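The entropy weighting step described above can be made concrete with a short sketch: factor values are normalized per column, each factor's information entropy is computed, and weights come from each factor's divergence (1 − entropy). The three factors and scores below are hypothetical, and the final weighted sum stands in for the study's risk matrix step.

```python
# Sketch: entropy weight method for risk factors, then a weighted risk score.
import math

def entropy_weights(matrix):
    """matrix[i][j]: positive value of factor j for region i."""
    m, n = len(matrix), len(matrix[0])
    k = 1.0 / math.log(m)
    weights = []
    for j in range(n):
        col = [matrix[i][j] for i in range(m)]
        total = sum(col)
        p = [v / total for v in col]
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)  # entropy of factor j
        weights.append(1.0 - e)  # divergence: more spread -> more weight
    s = sum(weights)
    return [w / s for w in weights]

# Three regions scored on (hazard, exposure, sensitivity) -- illustrative values
scores = [
    [0.8, 0.4, 0.5],
    [0.3, 0.9, 0.2],
    [0.5, 0.5, 0.9],
]
w = entropy_weights(scores)
risk = [sum(wi * v for wi, v in zip(w, row)) for row in scores]
```

Factors whose values barely differ across regions receive little weight, which is the method's appeal for screening long factor lists.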
EGU2020-5868 | Displays | ITS1.5/NH9.21
Review, mathematical representation and classification of preventive drought management measures
Ana Paez, Gerald Corzo, and Dimitri Solomatine
In proactive drought management plans, a key element is analyzing, selecting and allocating measures aimed at increasing resistance to droughts and reducing their potential impacts on society, the environment and the economy. Currently, these measures, known as preventive drought management measures (Fatulová et al., 2015), are embedded within measures for flood management, catchment management plans and rural development plans, among others. This situation raises two issues. Firstly, information about potential preventive drought management measures (PDMM) is commonly fragmented, and it is not a trivial task to find or select measures that could be implemented as PDMM. Secondly, even though the same measure can be implemented from different management perspectives (flood management, land degradation management, catchment management, rural development planning), its applicability, advantages and limitations may change according to the management perspective.
Considering the above, this study provides a review of PDMM that includes measure description, applicability, limitations, mathematical representation (for further implementation in modelling systems) and classification, from a drought management perspective. It is worth mentioning that the study focuses on hydrologically based measures applicable to agricultural and hydrological drought management.
The research methodology is divided into three phases. The first phase identifies drivers that trigger and/or enhance agricultural and hydrological droughts. This analysis is carried out from a hydrological angle, in which land surface processes and human activities are potential drivers of agricultural and hydrological droughts (Van Loon et al., 2016). The second phase examines an extensive list of technical documents, books, book sections, journal articles and case studies to identify measures that could manage or mitigate the impact of these potential drivers. In this phase, PDMM are described in terms of applicability, advantages, limitations and mathematical representation for further implementation in modelling systems. Based on this analysis, the third phase classifies the PDMM into three categories: nature-based solutions, grey infrastructure and changes in human water consumption.
How to cite: Paez, A., Corzo, G., and Solomatine, D.: Review, mathematical representation and classification of preventive drought management measures, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5868, https://doi.org/10.5194/egusphere-egu2020-5868, 2020.
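A mathematical representation of the kind the review targets can be illustrated for one grey-infrastructure measure, a storage reservoir, written as a discrete water balance that a modelling system could step through time. The parameters and inflow/demand series below are invented for illustration; real PDMM formulations would be measure-specific.

```python
# Sketch: one time step of a reservoir water balance,
#   S_{t+1} = min(S_t + inflow - release, S_max),
# releasing water to meet demand when storage allows.
def step_storage(S, inflow, demand, S_max):
    release = min(demand, S + inflow)          # supply what is available
    S_new = min(S + inflow - release, S_max)   # spill above capacity
    deficit = demand - release                 # unmet demand (drought impact)
    return S_new, release, deficit

S, series = 50.0, []
for inflow, demand in [(10, 20), (0, 20), (0, 20), (0, 20)]:
    S, release, deficit = step_storage(S, inflow, demand, S_max=100.0)
    series.append((round(S, 1), release, deficit))
```

As inflows dry up, storage is drawn down until a deficit appears, which is exactly the quantity a preventive measure is meant to delay or reduce.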
EGU2020-19990 | Displays | ITS1.5/NH9.21 | Highlight
Generating multiple resilience dividends from managing unnatural disasters. Opportunities for measurement and policy.
Reinhard Mechler and Stefan Hochrainer-Stigler
Despite solid evidence regarding the large benefits of reducing disaster risk, it has remained difficult to motivate sustained investment into disaster risk reduction (DRR) and resilience. Recently, international policy debate has started to emphasize the need for focusing DRR investment toward actions that generate multiple dividends, including reducing loss of lives and livelihoods, unlocking development, and creating development co-benefits. We examine whether available and innovative decision support tools are fit for purpose. Focusing on the Asia region, we identify evidence of multiple dividends crafted using expert-based methods, such as cost–benefit analysis for selecting and evaluating “hard resilience-type” interventions. Given a rising demand for “softer” and systemic DRR investments in projects and programs, participatory decision support tools have become increasingly relevant. As one set of tools, resilience capacity (capital) measurement approaches may be used to support actions and decisions throughout all stages of the project cycle. Because they measure capacity for resilience dividends rather than outcomes, such tools can serve as participatory decision support for organizations working at community and other levels, both for scoping out how development and disaster risk interact and for supporting the co-generation of multiple resilience dividend-type solutions with those at risk.
How to cite: Mechler, R. and Hochrainer-Stigler, S.: Generating multiple resilience dividends from managing unnatural disasters. Opportunities for measurement and policy., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19990, https://doi.org/10.5194/egusphere-egu2020-19990, 2020.
EGU2020-2488 | Displays | ITS1.5/NH9.21
A Study on the Difference of Urban and Rural Resilience under Climate Change - A Case Study of Chiayi County
Shin-En Pai and Hsueh-Sheng Chang
In recent years, the impacts of climate change have posed critical risks to urban and rural systems, and how to mitigate the damage caused by extreme climate events has become a topic of considerable concern in many countries. The United Nations International Strategy for Disaster Reduction (UNISDR) noted in the Hyogo Framework for Action (HFA) and the Sendai Framework for Disaster Risk Reduction 2015-2030 (Sendai Framework) that improving community resilience will help to cope with the harm caused by climate change. However, most previous research on resilience has focused solely on either urban or rural areas, and has failed to clearly identify the differences in resilience between the two. If the difference in resilience between urban and rural areas in the face of climate change can be understood, planners can be provided with better planning strategies and resource allocation. Accordingly, this study first developed a resilience index through a literature review, then filtered and screened the indicators through Principal Component Analysis (PCA). The index was then applied to empirical areas, and the spatial correlation of resilience was explored through Local Indicators of Spatial Association (LISA). Finally, binary logistic regression was used to analyze the difference in resilience between urban and rural areas under climate change.
How to cite: Pai, S.-E. and Chang, H.-S.: A Study on the Difference of Urban and Rural Resilience under Climate Change- A Case Study of Chiayi County, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2488, https://doi.org/10.5194/egusphere-egu2020-2488, 2020.
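The indicator-screening step can be sketched with synthetic data: standardize the indicator matrix, take principal components via SVD, and keep the components explaining most of the variance. This NumPy-only sketch assumes a made-up set of 50 townships and 6 indicators and a 90% variance threshold; it does not reproduce the study's actual indicator set or screening criteria.

```python
# Sketch: PCA via SVD on standardized resilience indicators.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 6))                   # 50 townships x 6 indicators (synthetic)
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=50)  # make two indicators nearly redundant

Xs = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize each indicator
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
explained = s**2 / np.sum(s**2)                # variance ratio per component
n_keep = int(np.searchsorted(np.cumsum(explained), 0.90) + 1)  # ~90% of variance
components = Vt[:n_keep]                       # loadings used for screening
```

Redundant indicators load onto the same component, which is what lets PCA thin out an over-complete index before the spatial analysis.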
EGU2020-4852 | Displays | ITS1.5/NH9.21
Flood resilience and indirect impacts in art cities
Chiara Arrighi and Fabio Castelli
Resilience is commonly defined as the ability to recover from a shock and quickly restore antecedent conditions. Although it is widely recognized as crucial to reducing adverse impacts and is gaining importance at the global level, resilience to most natural hazards is difficult to measure and predict, as both direct and indirect impacts matter. In this work the mutual connection between flood resilience and indirect flood impacts is investigated through a mathematical model which describes the temporal evolution of the state of the system after an urban inundation event. The inputs to the resilience model are i) a hydraulic model simulating the flood hazard; ii) a vulnerability and recovery model estimating the physical damage to cultural heritage and the temporal persistence of direct and indirect consequences. The method is applied to the historic district of Florence (Italy), affected by a severe flood in 1966. The variables selected as proxies of the state of the system are the number of monuments open to the public after the flood and the number of visitors, which represent a measure of indirect social and economic impacts on the city. The results show that the resilience model enables the quantification of indirect impacts due to the loss of accessibility of cultural heritage and allows the effectiveness of prevention measures to be evaluated.
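A model of this general kind can be sketched as a recovery curve: the system state (here, the fraction of monuments open to the public) drops after the flood and recovers over time, and the cumulative functionality deficit measures indirect impact. The exponential form, drop fraction and recovery time below are illustrative assumptions, not the values or functional form used in the study.

```python
# Sketch: exponential recovery of system state after a flood, and the
# cumulative functionality deficit as an indirect-impact measure.
import math

def state(t, drop=0.7, tau=90.0):
    """Fraction of monuments open t days after the flood (1.0 = all open)."""
    return 1.0 - drop * math.exp(-t / tau)

def cumulative_loss(horizon_days, dt=1.0, **kw):
    """Integral of the functionality deficit over the recovery horizon."""
    steps = int(horizon_days / dt)
    return sum((1.0 - state(i * dt, **kw)) * dt for i in range(steps))

loss = cumulative_loss(365)  # deficit in "monument-open" days over one year
```

A prevention measure that shrinks the initial drop or shortens the recovery time reduces this integral, which is how its effectiveness against indirect impacts can be compared.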
Acknowledgments
The authors were beneficiaries of funding from the Italian Ministry of Education, University and Research (MIUR) under the PRIN 2015 programme, through the Project MICHe “Mitigating the impacts of natural hazards on cultural heritage sites, structures and artefacts”.
How to cite: Arrighi, C. and Castelli, F.: Flood resilience and indirect impacts in art cities, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4852, https://doi.org/10.5194/egusphere-egu2020-4852, 2020.
EGU2020-10315 | Displays | ITS1.5/NH9.21
From Ashes to Fire: The Possibilities of the Phoenix Effect in Post-Disaster Lombok, Indonesia
Jop Koopman
The Lombok earthquake of August 2018 killed approximately 555 people, injured 1,400, and displaced 353,000. With Indonesia being vulnerable to natural disasters due to its geographic location, events like these are not uncommon. However, this event differed significantly from the majority of disasters in the Indonesian archipelago. The difference pertains to how the communities researched in this thesis coped with the adversity they had experienced and how they showed resilience in a unique way.
A disaster drastically ushers in a liminal period wherein its victims are forced to rethink certain aspects of social life, give meaning to what has happened, and determine how to rebuild society sustainably.
This thesis argues that returning to a pre-disaster state of society is not possible, owing to the lived experiences during the disaster and its aftermath. Instead of going back, the culture of response of the Indonesian government, NGOs and communities on which this thesis focuses started a process towards Dyer’s Phoenix Effect.
This thesis explores the cultural, social, and organizational changes in post-disaster Lombok which make the occurrence of the Phoenix Effect likely. (1) Cultural changes comprise the explanations for the earthquake from different religious perspectives and the resurgence of traditionally embedded building strategies. (2) Social changes amount to the reinvention of gotong royong from a state philosophy into an embedded practice of mutual help. (3) Organizational changes signify the biopolitics of disaster management by the Indonesian government, the role of NGOs, and the emergence of people’s initiatives to become more resilient.
This thesis concludes that the possibility of the Phoenix Effect is likely, if the involved communities can maintain their cultural, organizational, and social changes sustainably.
How to cite: Koopman, J.: From Ashes to Fire: The Possibilities of the Phoenix Effect in Post-Disaster Lombok, Indonesia, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10315, https://doi.org/10.5194/egusphere-egu2020-10315, 2020.
EGU2020-14406 | Displays | ITS1.5/NH9.21
The impact of hurricanes on the island of Saint-Martin (French West Indies) from 1954 to 2017: how are our society changes?
Kelly Pasquon, Gwenaël Jouannic, Julien Gargani, Chloé Tran Duc Minh, and Denis Crozier
Natural disasters lead to many victims and major damage in France and around the world. In 2017, Hurricane Irma hit the French islands of Saint-Martin and Saint-Barthélemy (West Indies), killing 11 people and causing more than €2 billion in insured damage. Ranked category 5 on the Saffir-Simpson scale, with average winds of 287 km/h, this hurricane highlighted the vulnerability of our society to this type of phenomenon.
One can question society's ability to face up to and recover from the consequences of such events. In this sense, this work examines the adaptation of the island of Saint-Martin to hurricanes and to its wider environment. We have chosen to focus on the evolution of this island over 65 years, from 1954 to 2017 (before Hurricane Irma). We mainly used aerial images from the IGN (Institut National de l’Information Géographique et Forestière), available at regular intervals since 1947. To characterize this evolution, we focused on land use (buildings, infrastructure, and anthropization) and demographics.
We show in this study that between 1954 and 2017 (before Hurricane Irma), Saint-Martin had to adapt to numerous constraints, some of which were far more important than hurricanes. In 65 years, the population density of the French part of Saint-Martin increased from 75 to 668 inhab/km². The majority of this increase occurred in the five-year period following the Pons law of 1986, which granted tax breaks for real estate investment. More than 12,000 buildings were built in Saint-Martin to accommodate the island's new inhabitants as well as tourists. Many neighbourhoods experienced significant growth starting in the late 1980s. However, we observe differences in urban planning, a result of the social and territorial segregation that exists on the island: on the one hand, private residences in affluent neighbourhoods; on the other, working-class neighbourhoods with vulnerable dwellings. The effect of hurricanes on this society, which has been highly unequal from the 1960s to the 1980s, is to reinforce inequalities. The fragile dwellings of the poorest populations have been more deeply affected than those of the wealthiest parts of the population, which received financial support for reconstruction.
How to cite: Pasquon, K., Jouannic, G., Gargani, J., Tran Duc Minh, C., and Crozier, D.: The impact of hurricanes on the island of Saint-Martin (French West Indies) from 1954 to 2017: how are our society changes?, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-14406, https://doi.org/10.5194/egusphere-egu2020-14406, 2020.
EGU2020-11885 | Displays | ITS1.5/NH9.21
Improving crop water use in West Africa in the context of climate change
Sehouevi Mawuton David Agoungbome, Nick van de Giesen, Frank Ohene Annor, and Marie-Claire ten Veldhuis
Africa’s population is growing fast and is expected to double by 2050, meaning that food production must keep pace in order to meet demand. However, one of the major challenges of agriculture in Africa is productivity (World Bank, 2009; IFRI, 2016). For instance, more than 40 million hectares of farmland were dedicated to maize in Africa in 2017 (approx. 20% of the world's maize area), but only 7.4% of total world maize production came from the African continent (FAO, 2017). This poor productivity is rooted in a lack of good climate and weather information, slow technology uptake, and limited financial support for farmers. In West Africa, where more than 70% of crop production is rain-fed, millions of farmers depend on rainfall, yet the region is one of the most vulnerable and least monitored in terms of climate change and rainfall variability. Given the high uncertainty of future climate conditions in the region, big challenges lie ahead: farmers will be exposed to substantial damage and losses, leading to food insecurity, famine, and poverty if measures are not put in place to improve productivity. This study aims to address low agricultural productivity by providing farmers with the right moment to start farming in order to improve the efficiency and productivity of crop water use. By analyzing the yield response to water availability of specific crops using AquaCrop, the Food and Agriculture Organization's crop growth model, we investigate the variability of crop water productivity throughout the rainy season and derive recommendations that help optimize rainfall water use and maximize crop yield.
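The yield response to water availability analyzed above can be illustrated with the classic FAO water-production function from Irrigation and Drainage Paper 33, a much simpler relation than the full AquaCrop model used in the study; the evapotranspiration values below are made-up numbers for demonstration only.

```python
def relative_yield(eta, etm, ky):
    """Relative yield Ya/Ym from the FAO-33 water-production function:
    Ya/Ym = 1 - Ky * (1 - ETa/ETm), where ETa and ETm are the actual and
    maximum seasonal evapotranspiration and Ky is the crop yield-response
    factor. Clipped at zero for severe deficits."""
    deficit = 1.0 - eta / etm
    return max(0.0, 1.0 - ky * deficit)

# Example: maize (Ky ~ 1.25 per FAO-33) under a 20% evapotranspiration
# deficit, e.g. from a poorly timed planting date within the rainy season.
ya_over_ym = relative_yield(eta=400.0, etm=500.0, ky=1.25)  # ≈ 0.75
```

Because Ky for maize exceeds 1, a water deficit translates into a more than proportional yield loss, which is why the timing of planting relative to rainfall matters so much in rain-fed systems.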
How to cite: Agoungbome, S. M. D., van de Giesen, N., Annor, F. O., and ten Veldhuis, M.-C.: Improving crop water use in West Africa in the context of climate change, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11885, https://doi.org/10.5194/egusphere-egu2020-11885, 2020.
ITS1.7/SM3.5 – International Monitoring System and On-site Verification for the CTBT, disaster risk reduction and Earth sciences
EGU2020-12289 | Displays | ITS1.7/SM3.5
Hydroacoustic measurements of the 2004 submarine eruption near Nightingale Island, Tristan da Cunha
Dirk Metz, Ingo Grevemeyer, Marion Jegen, Wolfram Geissler, and Julien Vergoz
Little is known about active volcanism in the remote regions of the global ocean. Here, we resort to long‐range acoustic measurements to study the July/August 2004 eruption at Isolde, a submarine volcanic cone in the Tristan da Cunha archipelago, South Atlantic Ocean. Underwater sound phases associated with the event were recorded as far as Cape Leeuwin, Western Australia, where a bottom-moored hydrophone array is operated as part of the International Monitoring System (IMS). IMS hydrophone data in combination with local seismic observations suggest that the center of activity is located east of Nightingale Island, where a recent seafloor mapping campaign aboard R/V Maria S Merian (MSM20/2) has revealed a previously unknown, potentially newly formed stratocone. Transmission loss modeling via the parabolic equation approach indicates that low-frequency sound phases travel at shallow depths near and within the Antarctic Circumpolar Current, thereby avoiding bathymetric interference along the 10,265 km source-receiver path. Our study highlights the potential of the IMS network for the detection and study of future eruptions both at Isolde and elsewhere. Implications for test-ban treaty monitoring and volcano early warning will be discussed.
How to cite: Metz, D., Grevemeyer, I., Jegen, M., Geissler, W., and Vergoz, J.: Hydroacoustic measurements of the 2004 submarine eruption near Nightingale Island, Tristan da Cunha, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12289, https://doi.org/10.5194/egusphere-egu2020-12289, 2020.
EGU2020-5481 | Displays | ITS1.7/SM3.5
Analysis of CTBT IMS Hydroacoustic hydrophone station underwater system electronics calibration sequences
Mario Zampolli, Georgios Haralabus, Jerry Stanley, and Peter Nielsen
The end-to-end calibration from the hydrophone ceramic element input to the digitizer output of CTBT IMS Hydroacoustic (HA) hydrophone stations is measured in a laboratory environment before deployment. After the hydrophones are deployed permanently with the Underwater System (UWS) hydrophone triplets, the response of the digitizer component can be measured by activating remotely a relay which excludes the hydrophone ceramic, preamplifier and riser cable, and feeds a pre-stored known waveform into the digitizer circuit via a digital-to-analogue converter. Analysis of these underwater calibration sequences makes it possible to verify the stability of the digitizer response over time and obtain useful information for investigations which require an accurate knowledge of the system response. Results are presented showing the stability of the UWS electronics response over time and one case, pertaining to the H10S triplet of HA10 Ascension Island, where changes in the calibration response appeared after the onset of electronic noise in one hydrophone channel with cross-talk to the other two channels.
How to cite: Zampolli, M., Haralabus, G., Stanley, J., and Nielsen, P.: Analysis of CTBT IMS Hydroacoustic hydrophone station underwater system electronics calibration sequences, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5481, https://doi.org/10.5194/egusphere-egu2020-5481, 2020.
EGU2020-18440 | Displays | ITS1.7/SM3.5
Long-range hydroacoustic observations of the Monowai Volcanic Centre as a proxy for seasonal variations in sound propagation
Pieter Smets, Kees Weemstra, and Läslo Evers
Hydroacoustic activity of the submarine Monowai Volcanic Centre (MVC) is repeatedly observed at two distant triplet hydrophone stations, south of the Juan Fernandez Islands (H03S, 9,159 km) and north of Ascension Island (H10N, 15,823 km). T-phase converted energy recorded at the broadband seismic station Rarotonga in the Cook Islands (RAR, 1,845 km) is used as a reference for the cross-correlation analysis. A detailed processing scheme for the calculation of the daily cross-correlation functions (CCFs) of the hydroacoustic and seismic data is provided. Preprocessing is essential to account for the non-identical measurements and sensitivities as well as the different sample rates. Further postprocessing by systematic data selection has to be applied before stacking the CCFs in order to account for the non-continuous activity of the MVC source. Daily volcanic activity is determined for the period from 2006 until 2018 using the signal-to-noise ratio of the CCFs, assuming sound propagation in the SOFAR channel. Monthly stacked CCFs with clear volcanic activity are used to study seasonal variations in sound propagation between the MVC and the hydrophone stations. In winter, however, a faster-than-expected signal is observed at H10N, which is hypothesized to result from (partial) propagation through sea ice formed along the path near Antarctica.
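The correlate-select-stack workflow outlined above can be sketched as follows. This is a minimal illustration on synthetic traces, not the authors' processing scheme: the function names, the SNR threshold, and the synthetic data are all hypothetical, and a real implementation would additionally handle instrument responses, resampling, and time alignment of the hydrophone and seismic records.

```python
import numpy as np

def daily_ccf(trace_a, trace_b):
    """Normalized cross-correlation of two equal-length, equal-rate traces."""
    a = (trace_a - trace_a.mean()) / (trace_a.std() * len(trace_a))
    b = (trace_b - trace_b.mean()) / trace_b.std()
    return np.correlate(a, b, mode="full")

def stack_selected(ccfs, snr_threshold=2.0):
    """Stack only the daily CCFs whose peak-to-RMS ratio exceeds an
    (illustrative) threshold, mimicking the data selection needed for a
    source that is not continuously active."""
    selected = [c for c in ccfs
                if np.abs(c).max() / np.sqrt(np.mean(c**2)) >= snr_threshold]
    return np.mean(selected, axis=0) if selected else None

# Synthetic example: a common signal buried in independent noise on the
# two receivers, with a fixed inter-station delay of 25 samples.
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 500))
ccfs = []
for _ in range(30):  # thirty "days" of data
    tr_a = signal + 0.5 * rng.standard_normal(500)
    tr_b = np.roll(signal, 25) + 0.5 * rng.standard_normal(500)
    ccfs.append(daily_ccf(tr_a, tr_b))
stacked = stack_selected(ccfs)
lag = np.argmax(stacked) - (len(signal) - 1)  # peak lag in samples
```

Stacking the selected CCFs suppresses the incoherent noise while the travel-time peak survives, which is what makes month-scale stacks usable for tracking seasonal travel-time variations.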
How to cite: Smets, P., Weemstra, K., and Evers, L.: Long-range hydroacoustic observations of the Monowai Volcanic Centre as a proxy for seasonal variations in sound propagation, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18440, https://doi.org/10.5194/egusphere-egu2020-18440, 2020.
EGU2020-21819 | Displays | ITS1.7/SM3.5
Long-term trend analysis of deep-ocean acoustic noise data
Sei-Him Cheong, Stephen P Robinson, Peter M Harris, Lian S Wang, and Valerie Livina
Underwater noise is recognised as a form of marine pollutant, and there is evidence that overexposure to excessive noise levels can affect the wellbeing of the marine ecosystem. Consequently, the variation in ambient sound levels in the deep ocean has been the subject of a number of recent studies, with particular interest in the identification of long-term trends. We describe a statistical method for performing long-term trend analysis and for evaluating the uncertainty of the estimated trends from deep-ocean noise data. This study has been extended to include measured data from four monitoring stations located in the Indian (Cape Leeuwin and Diego Garcia), Pacific (Wake Island), and South Atlantic (Ascension Island) Oceans over periods spanning 8 to 15 years. The data were obtained from the hydroacoustic monitoring stations of the Preparatory Commission for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO). The monitoring stations provide information at a sampling frequency of 250 Hz, leading to very large datasets, and at acoustic frequencies up to 105 Hz.
The analysis method uses a flexible discrete model that incorporates terms that capture seasonal variations in the data together with a moving-average statistical model to describe the serial correlation of residual deviations. The trend analysis is applied to time series representing daily aggregated statistical levels for four frequency bands to obtain estimates for the change in sound pressure level (SPL) over the examined period with associated coverage intervals. The analysis demonstrates that there are statistically significant changes in the levels of deep-ocean noise over periods exceeding a decade. The main features of the approach include (a) using a functional model with terms that represent both long-term and seasonal behaviour of deep-ocean noise, (b) using a statistical model to capture the serial correlation of the residual deviations that are not explained by the functional model, (c) using daily aggregation intervals derived from 1-minute sound pressure level averages, and (d) applying a non-parametric approach to validate the uncertainties of the trend estimates that avoids the need to make an assumption about the distribution of the residual deviations.
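The core of the functional model described above, a long-term trend plus annual seasonal terms fitted to daily aggregated levels, can be sketched with ordinary least squares. This is a simplified illustration only: the moving-average residual model and the non-parametric uncertainty validation of the study are omitted, and the coefficients and data below are synthetic.

```python
import numpy as np

def fit_trend(days, spl):
    """Least-squares fit of
    SPL(t) = a + b*t + c*sin(2*pi*t/365.25) + d*cos(2*pi*t/365.25),
    with t in days; returns [offset, slope per day, seasonal sin, cos]."""
    w = 2 * np.pi * days / 365.25
    design = np.column_stack([np.ones_like(days), days, np.sin(w), np.cos(w)])
    coeffs, *_ = np.linalg.lstsq(design, spl, rcond=None)
    return coeffs

# Synthetic decade of daily SPL: +0.2 dB/year trend, 1.5 dB seasonal swing,
# 0.5 dB of daily scatter standing in for the residual deviations.
rng = np.random.default_rng(1)
t = np.arange(3653.0)
spl = 95.0 + (0.2 / 365.25) * t + 1.5 * np.sin(2 * np.pi * t / 365.25)
spl += 0.5 * rng.standard_normal(t.size)
a, b, c, d = fit_trend(t, spl)
trend_db_per_decade = b * 3652.5
```

Including the seasonal terms in the design matrix keeps the annual cycle from leaking into the estimated slope, which matters when the record length is not an integer number of years.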
The results show that the long-term trends vary differently at the four stations. Low-frequency noise generally dominated the significant trends in these oceans. The relative differences between the various statistical levels are remarkably similar for all the frequency bands. Given the complexity of the acoustic environment, it is difficult to identify the main causes of these trends; some possible explanations are discussed. It was, however, observed that some stations are subject to strong seasonal variation with a high degree of correlation with climatic factors such as sea surface temperature, Antarctic ice coverage, and wind speed. These seasonal effects are less pronounced at stations located closer to the equator.
How to cite: Cheong, S.-H., Robinson, S. P., Harris, P. M., Wang, L. S., and Livina, V.: Long-term trend analysis of deep-ocean acoustic noise data, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21819, https://doi.org/10.5194/egusphere-egu2020-21819, 2020.
EGU2020-4594 | Displays | ITS1.7/SM3.5
Estimates of seismo-acoustic transfer functions relevant to CTBT IMS T-phase stations
Peter Nielsen, Mario Zampolli, Ronan Le Bras, Georgios Haralabus, Jeffry Stevens, and Jeffrey Hanson
The Comprehensive Nuclear-Test-Ban Treaty (CTBT) International Monitoring System (IMS) is a world-wide network of stations and laboratories designed to detect nuclear explosions underground, in the oceans, and in the atmosphere. The IMS incorporates four technologies: seismic, hydroacoustic, and infrasound (collectively referred to as waveform technologies), and radionuclide (particulate and noble gas). The focus of this presentation is the hydroacoustic component of the IMS, which consists of 6 hydroacoustic stations employing hydrophones deployed in the oceans and 5 near-shore seismic stations, called T-phase stations, located on islands or in continental coastal regions. The purpose of T-phase stations is to detect water-borne hydroacoustic pressure waves converted into seismic waves that propagate in the earth’s crust and are detected by land seismometers. However, the conversion process from in-water pressure to near-interface seismic waves is complex and strongly dependent on the properties of the local underwater and geological environment. To further understand this conversion process, state-of-the-art hybrid seismo-acoustic wave propagation models have been applied to simplified environments and to scenarios representative of the conditions encountered at IMS T-phase stations, in order to compute broadband pressure time-series in the water and particle velocity components on land. Transfer functions from in-water pressure to on-land seismic particle velocity, and vice versa, were estimated both from modelling results and from real data acquired in locations where the hydrophones and (non-IMS) seismic stations were within 50 km of each other. The presented results give a first assessment of the feasibility of characterizing the hydroacoustic phase of an in-water event from on-land seismic recordings at IMS T-phase stations, subject to limited a priori environmental information and to limiting factors such as bandwidth and instrumental and/or environmental noise.
How to cite: Nielsen, P., Zampolli, M., Le Bras, R., Haralabus, G., Stevens, J., and Hanson, J.: Estimates of seismo-acoustic transfer functions relevant to CTBT IMS T-phase stations, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4594, https://doi.org/10.5194/egusphere-egu2020-4594, 2020.
EGU2020-18451 | Displays | ITS1.7/SM3.5
Seismically induced ground motions and source mechanism passively retrieved from remote infrasound detections
Shahar Shani-Kadmiel, Gil Averbuch, Pieter Smets, Jelle Assink, and Läslo Evers
The amplitude of ground motions caused by earthquakes and subsurface explosions generally decreases with distance from the epicenter. However, in the near-source region, other factors, e.g., near-surface geology, topography, and the source radiation pattern, may significantly alter the amplitude of ground motions. Although source location and magnitude (or yield) can be rapidly determined using distant seismic stations, without a dense seismological network in the epicentral region the ability to resolve such variations is limited.
Besides seismic waves, earthquakes and subsurface explosions generate infrasound, i.e., inaudible acoustic waves in the atmosphere. The mechanical ground motions from such sources, including the effects of the above-mentioned factors, are encapsulated in the acoustic pressure perturbations over the source region. Due to the low-frequency nature of infrasound, and facilitated by waveguides in the atmosphere, such perturbations propagate over long ranges with limited attenuation and are detected at ground-based stations. In this work we demonstrate a method for resolving ground motions and the source mechanism from remotely detected infrasound. This is illustrated for the 2010 Mw 7.0 Port-au-Prince, Haiti earthquake and for the sixth and largest nuclear test conducted by the Democratic People's Republic of Korea in 2017.
Such observations are made possible by: (1) An advanced array processing technique that enables the detection of coherent wavefronts, even when amplitudes are below the noise level, and (2) A backprojection technique that maps infrasound detections in time to their origin on the Earth's surface.
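The backprojection step in (2) can be illustrated with a minimal sketch: each candidate point on the surface maps a detection time back to an origin time via an assumed celerity, and the point where detections from several stations imply the same origin time is the backprojected source. Station and grid coordinates and the 290 m/s celerity below are hypothetical, not the authors' implementation.

```python
import numpy as np

def backproject(det_time_s, station_xy, grid_xy, celerity_m_s=290.0):
    """Map one infrasound detection back to candidate origin times.

    For each candidate grid point, the origin time is the detection time
    minus the travel time at an assumed celerity (horizontal group speed).
    """
    d = np.hypot(grid_xy[:, 0] - station_xy[0], grid_xy[:, 1] - station_xy[1])
    return det_time_s - d / celerity_m_s

# Two stations detecting the same event: the grid point where the implied
# origin times agree best is the likely source location.
grid = np.array([[0.0, 0.0], [100e3, 0.0]])          # candidate points (m)
sta1, sta2 = np.array([290e3, 0.0]), np.array([0.0, 290e3])
t1 = backproject(1000.0, sta1, grid)                  # detection at t = 1000 s
t2 = backproject(1000.0, sta2, grid)
best = int(np.argmin(np.abs(t1 - t2)))                # index of best grid point
```

In a real application the grid covers the whole source region and the spread of implied origin times across many stations is minimized at every point.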
Infrasound measurements are conducted globally for the verification of the Comprehensive Nuclear-Test-Ban Treaty and, together with regional infrasound networks, allow for unprecedented global coverage. This makes infrasound feasible as an earthquake disaster mitigation technique for the first time and contributes to the Treaty's verification capacity.
How to cite: Shani-Kadmiel, S., Averbuch, G., Smets, P., Assink, J., and Evers, L.: Seismically induced ground motions and source mechanism passively retrieved from remote infrasound detections, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18451, https://doi.org/10.5194/egusphere-egu2020-18451, 2020.
EGU2020-17691 | Displays | ITS1.7/SM3.5
IDC Infrasound technology path to continuous improvement
Pierrick Mialle
The IDC advances its methods and continuously improves its automatic system for infrasound technology. The IDC focuses on enhancing the automatic system for the identification of valid signals and on optimizing the network detection threshold by identifying ways to refine the signal characterization methodology and association criteria. Alongside these efforts, the IDC and its partners also focus on expanding the capabilities of NDC-in-a-Box (NiaB), a software package specifically aimed at the CTBTO user community, the National Data Centres (NDCs).
An objective of this study is to illustrate the latest efforts by the IDC to increase trust in its products, while continuing its infrasound-specific effort to reduce the number of associated infrasound arrivals that are rejected from the automatic bulletins when generating the reviewed event bulletins. A number of ongoing projects at the IDC will be presented, such as: improving detection accuracy at the station processing stage by introducing the infrasound signal detection and interactive review software DTK-(G)PMCC (Progressive Multi-Channel Correlation) and by evaluating the performance of detection software; and developing the new generation of automatic waveform network processing software, NET-VISA, to pursue a lower ratio of false alarms over GA (Global Association) and a path for revisiting the historical IRED. The IDC has identified a number of areas for improvement of its infrasound system, which will be briefly introduced.
How to cite: Mialle, P.: IDC Infrasound technology path to continuous improvement, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17691, https://doi.org/10.5194/egusphere-egu2020-17691, 2020.
EGU2020-7047 | Displays | ITS1.7/SM3.5
A climatology of infrasound detections at Kerguelen Island
Olivier F.C. den Ouden, Jelle D. Assink, Pieter S.M. Smets, and Läslo G. Evers
The International Monitoring System (IMS) is in place for the verification of the Comprehensive Nuclear-Test-Ban Treaty (CTBT). The IMS includes 60 infrasound arrays, of which 51 currently provide real-time infrasound recordings from around the world. These arrays play a central role in the characterization of the global infrasonic wavefield and the localization of infrasound sources.
Power Spectral Density (PSD) estimates give insight into the noise levels per station and array. The IMS global low- and high-noise model curves were determined in a study by Brown et al. [2014] using a distribution of computed PSDs. All the IMS infrasound arrays, except IS23, were included in the determination of the atmospheric ambient noise curves. IS23 is located at Kerguelen Island and consists of 15 elements divided into five 100-m-aperture triplet arrays. The array is located at one of the noisiest locations in the world, due to the high wind conditions that exist year-round. The resulting high noise floor appears to hamper infrasound detection at this island array.
In this work, the effects of meteorological, oceanographic, and topographic conditions on the infrasound recordings at IS23 are studied. Five years of infrasound data recorded by IS23 are analyzed using various processing techniques. Contributions within different frequency bands are evaluated. The infrasound detections are explained in terms of the stratospheric winds and ocean wave activity. Understanding and characterizing the low-frequency recordings of IS23 is important for successfully including this array in verification of the CTBT.
How to cite: den Ouden, O. F. C., Assink, J. D., Smets, P. S. M., and Evers, L. G.: A climatology of infrasound detections at Kerguelen Island, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7047, https://doi.org/10.5194/egusphere-egu2020-7047, 2020.
EGU2020-21841 | Displays | ITS1.7/SM3.5
Test bench development dedicated to microbarometers run-in
Emeline Guilbert and Anthony Hue
Seismo Wave is continuously improving its metrological processes and quality surveys to guarantee the best infrasound sensor technology. In accordance with our quality approach, a running-in step for infrasound sensors has been investigated and implemented. Once the metrology process is completed (acoustical and electrical calibration, self-noise measurement), the objective is to keep monitoring the sensitivity of MB3a sensors during several days, using the in-situ electrical calibration capability. For this purpose, a new bench has been designed and characterized in our laboratory. Different sensitivity assessment methods have been compared. Testing conditions, bench design, methodology and results are laid out in this poster.
How to cite: Guilbert, E. and Hue, A.: Test bench development dedicated to microbarometers run-in, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21841, https://doi.org/10.5194/egusphere-egu2020-21841, 2020.
EGU2020-6415 | Displays | ITS1.7/SM3.5
Study of a high activity eruption sequence of Kadovar volcano, Papua New Guinea, using data recorded by the CTBT International Monitoring System
Hiroyuki Matsumoto, Mario Zampolli, Georgios Haralabus, Jerry Stanley, James Robertson, and Nurcan Meral Özel
The analysis of hydroacoustic signals originating from marine volcanic activity recorded by a remote hydroacoustic (HA) station of the Comprehensive Nuclear-Test-Ban Treaty (CTBT) International Monitoring System (IMS), HA11 at Wake Island, is presented in this study. The events studied pertain to an eruption series at Kadovar Island, Papua New Guinea, during the period January to February 2018. Local visual observations determined that the Kadovar volcano began to erupt at the summit of the island and then created new vent spots near the coast. The events included the collapse of a lava dome on 9 February 2018. Directions of arrival of the hydroacoustic signals detected at HA11 were evaluated using a cross-correlation technique, which allowed discrimination between hydroacoustic signals originating from the Kadovar volcanic activity and the numerous other hydroacoustic signals generated by general seismic activity in the Pacific. Discrimination between volcanic activity and seismicity was achieved by examining the time-frequency characteristics of the hydroacoustic signals, i.e. associating short-duration broadband bursts with volcanic eruptions, in line with criteria generally applied for such events. Episodes of high volcanic activity with as many as 80 detections per hour were identified on two occasions, separated by a one-month period of relative quiet. Some of the hydroacoustic signals were characterized by broadband frequency content and high received levels (ca. 30 dB higher than the ocean microseismic background). Corresponding non-hydroacoustic signals could not be identified by other regional IMS stations, providing an indication of the likely submarine origin of these events. Long-duration bursts recorded on the day when the lava dome collapsed have been identified and characterized in time-frequency space.
This study provides a further example of the added value of remote monitoring of marine volcanic events by CTBT IMS hydroacoustic stations.
How to cite: Matsumoto, H., Zampolli, M., Haralabus, G., Stanley, J., Robertson, J., and Meral Özel, N.: Study of a high activity eruption sequence of Kadovar volcano, Papua New Guinea, using data recorded by the CTBT International Monitoring System, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6415, https://doi.org/10.5194/egusphere-egu2020-6415, 2020.
EGU2020-7367 * | Displays | ITS1.7/SM3.5 | Highlight
NEMO - The NEar real-time MOnitoring system for bright fireballs
Theresa Ott, Esther Drolshagen, Detlef Koschny, Gerhard Drolshagen, Christoph Pilger, Pierrick Mialle, Jeremie Vaubaillon, and Björn Poppe
Fireballs are very bright meteors with magnitudes of at least -4. They can spark a lot of public interest, especially if they can be seen during daytime over populous areas. Social Media allows us to be informed about almost everything, worldwide, and in all areas of life in real time. In the age of intensive use of these media, information is freely available seconds after the sighting of a fireball.
This is the basis of the alert system which is part of NEMO, the NEar real-time MOnitoring system, for bright fireballs. It uses Social Media, mainly Twitter, to be informed about a fireball event in near real-time. In addition, the system accesses various data sources to collect further information about the detected fireballs. The sources range from meteor networks, the data from weather satellites or lightning detectors to the infrasound data of the IMS (International Monitoring System) operated by the CTBTO (Comprehensive Nuclear-Test-Ban Treaty Organisation).
Since large meteoroids or asteroids can be detected by these infrasound sensors when they enter the Earth's atmosphere, this network provides the possibility to detect fireballs worldwide and during day and night. From the infrasound data the energy of the object that caused the fireball can be determined and hence, its size and mass can be calculated. By combining all available information about the fireball from different data sources the amount of scientific knowledge about the event can be maximized.
NEMO has been under development for about 2.5 years. Since the beginning of the year, the system has been in operation at the European Space Agency as part of its Space Safety Programme. In this presentation we will give an overview of NEMO, its working principle, and its relation to the IMS.
How to cite: Ott, T., Drolshagen, E., Koschny, D., Drolshagen, G., Pilger, C., Mialle, P., Vaubaillon, J., and Poppe, B.: NEMO - The NEar real-time MOnitoring system for bright fireballs, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7367, https://doi.org/10.5194/egusphere-egu2020-7367, 2020.
EGU2020-9926 | Displays | ITS1.7/SM3.5
IDC events related to volcanic activity at Kamchatka Peninsula
Paulina Bittner, Jane Gore, David Applbaum, Aaron Jimenez, Marcela Villarroel, and Pierrick Mialle
The International Monitoring System (IMS) is designed to detect and locate nuclear test explosions as part of the Comprehensive Nuclear-Test-Ban Treaty (CTBT) verification regime. This network can also be used for civil applications, such as the remote monitoring of volcanic activity.
Events related to volcanic eruptions, which are listed in the International Data Centre (IDC) bulletins, are typically detected by infrasound stations of the IMS network. Infrasound station IS44 and primary seismic station PS36 are situated in Kamchatka, Russian Federation, in the vicinity of several active volcanoes. These two stations recorded seismo-acoustic events generated by volcanic eruptions. In addition to atmospheric events, the IMS network has the potential of detecting underwater volcanic activity. Under favourable conditions, the hydroacoustic stations located in the Pacific Ocean and PS36 may detect underwater events close to the shore of Kamchatka Peninsula.
The aim of this presentation is to show examples of volcanic eruptions at Kamchatka Peninsula recorded by the IMS network. Supplementary information obtained by other observing networks can be found in reports issued by Kamchatkan Volcanic Eruption Response Team (KVERT) or Tokyo Volcanic Ash Advisory Center (VAAC). Such information can be compared with events listed in IDC bulletins.
How to cite: Bittner, P., Gore, J., Applbaum, D., Jimenez, A., Villarroel, M., and Mialle, P.: IDC events related to volcanic activity at Kamchatka Peninsula , EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9926, https://doi.org/10.5194/egusphere-egu2020-9926, 2020.
EGU2020-13316 | Displays | ITS1.7/SM3.5
S-wave reflection imaging of a tectonically determined cavern by use of next generation electro-mechanic vibrator
Peter Labak, Attila Kovacs, and Endre Hegedus
The key objectives of ground-based geophysical mapping during an On-Site Inspection are to detect, locate and characterize the zones of rock damage associated with an underground nuclear explosion (UNE). A cavity, rubble zone and fractured rock matrix are also common features in the close vicinity of a cave of karstic origin. Natural cavities mainly develop within the weakest zones of the rock matrix. The features shared with a UNE are important, although the thermal and pressure effects are lacking in the case of a natural origin. Nevertheless, the similarities may justify the effort to investigate the cavern and its surroundings by geophysical methods.
The oval-shaped cavern, with a diameter of 28 m and located 70 m below the surface, was discovered within a clay mine in northern Hungary. The deep basement is composed of Triassic limestone; the cavern is located in the overlying Oligocene sandstone formation. As a result of hydrothermal activity in the Pleistocene, a cave formed in the limestone, which may have collapsed over time. The opening of the deep part of the cave influenced the overlying sandstone formation, but the collapse did not reach the surface.
As a consequence of a UNE, cracks and open fissures could provide a pathway for radioactive gas to find its way near to the surface. The detection of these fractured zones requires the highest possible resolution of seismic imaging of the subsurface. Therefore, we carried out a 2D survey above the cavern site and determined that the optimal method is to generate and detect horizontally polarized (SH) waves. The electro-mechanically driven vibrator provides a bandwidth ranging from 5 to 200 Hz, which can be extended up to 400 Hz. The use of the Lightning-type vibrator has broadened the seismic bandwidth, achieving a maximum penetration of 250 m with a substantial increase in resolution.
The joint interpretation of the seismic and geoelectric tomographic results with the SH-wave reflection section has provided a clear pattern of the tectonized rock matrix around the cavern.
How to cite: Labak, P., Kovacs, A., and Hegedus, E.: S-wave reflection imaging of a tectonically determined cavern by use of next generation electro-mechanic vibrator, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13316, https://doi.org/10.5194/egusphere-egu2020-13316, 2020.
The key objectives of a ground-based geophysical mapping during an On-site Inspection are to detect, locate and characterize the zones of rock damage associated with an underground nuclear explosion (UNE). The cavity, rubble zone and fracturized rock matrix are also common features in the close vicinity of a cave of karstic origin. The natural cavities are mainly developed within the weakest zones of the rock matrix. The connatural features with an UNE are important but the thermal and pressure effects are lacking in the case of natural origin. However, the similarities may justify the efforts to investigate the cavern and its surroundings by geophysical methods.
The oval shaped cavern with a diameter of 28 m located 70 m below the surface was discovered within a clay mine in N-Hungary. The deep basement is composed of Triassic limestone, the cavern is located in the overlying Oligocene sandstone formation. As a result of hydrothermal activity in the Pleistocene a cave formed in the limestone which may have collapsed over time. The opening of the deep part of the cave influenced the overlying sandstone formation but the collapse did not reach the surface.
As a consequence of a UNE the cracks and open fissures could provide a pathway for the radioactive gas to find its way near to the surface. The detection of these fracturized zones require the highest possible resolution of the seismic imaging of the subsurface. Therefore, we made a 2D survey above the cavern site and determined that the optimal method is to generate and detect horizontally polarized (SH) waves. The electro-mechanically driven vibrator has provided a bandwidth ranging from 5 to 200 Hz which can be extended up to 400 Hz. The use of the Lightning type vibrator has broadened the seismic bandwidth achieving the maximum penetration of 250 m with substantial increase of the resolution.
The joint interpretation of the seismic and geoelectric tomographic results together with the SH-wave reflection section provides a clear picture of the tectonized rock matrix around the cavern.
EGU2020-13214 | Displays | ITS1.7/SM3.5
Detection of the deep cavern at the Felsopeteny, Hungary site using seismic ambient noise data – Miriam Kristekova, Jozef Kristek, Peter Moczo, and Peter Labak
Nuclear explosions are banned by the Comprehensive Nuclear-Test-Ban Treaty (CTBT). The CTBT therefore needs robust and comprehensive verification tools to ensure that no nuclear explosion goes undetected. The detection of an underground cavity created by a nuclear explosion is a primary task for an on-site inspection (OSI) and for resonance seismometry. We have recently developed the finite-frequency-range spectral-power method, which makes it possible to use seismic ambient noise recorded at the free surface above an underground cavity to localize it. In this contribution we present results of applying the method to data recorded at the site of the Great Cavern near Felsopeteny, Hungary.
The CTBTO performed several active and passive seismic measurements at the free surface above the Great Cavern in September 2019. Seismic ambient noise was recorded continuously for one week at almost 50 stations with an interstation distance of around 50 m, covering an area of 400 x 400 m.
An oval-shaped cavern with a diameter of 28 m, located 70 m below the surface, was discovered within a clay mine in northern Hungary. The deep basement is composed of Triassic limestone, while the cavern lies in the overlying Oligocene sandstone formation. As a result of hydrothermal activity in the Pleistocene, a cave formed in the limestone, which may have collapsed over time. The opening of the deep part of the cave affected the overlying sandstone formation, but the collapse did not reach the surface.
We present the pre-processing procedure and the identification of the position of the cavern based on the recorded seismic ambient noise, and we checked the robustness of the obtained results. The results demonstrate the potential of our methodology for OSI purposes.
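The abstract does not give the details of the finite-frequency-range spectral-power method; as a generic sketch of the underlying idea, the spectral power of a noise record within a finite frequency band can be computed from a Welch power spectral density (the 6 Hz resonance, sampling rate and band edges below are illustrative assumptions, not values from the study):

```python
import numpy as np
from scipy.signal import welch

def band_power(trace, fs, f_lo, f_hi):
    """Spectral power of a noise record integrated over [f_lo, f_hi] Hz,
    estimated from a Welch PSD."""
    freqs, psd = welch(trace, fs=fs, nperseg=min(len(trace), 4096))
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return float(np.sum(psd[band]) * (freqs[1] - freqs[0]))

# Synthetic ambient noise: one record with a weak 6 Hz resonance,
# as might appear above a cavity, and one without.
fs = 100.0
t = np.arange(0.0, 600.0, 1.0 / fs)
rng = np.random.default_rng(0)
quiet = rng.normal(size=t.size)
above_cavity = quiet + 0.5 * np.sin(2.0 * np.pi * 6.0 * t)
print(band_power(above_cavity, fs, 5.0, 7.0) > band_power(quiet, fs, 5.0, 7.0))
```

Mapping such band power station by station over the 400 x 400 m grid would then highlight the stations located above a resonating cavity.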
How to cite: Kristekova, M., Kristek, J., Moczo, P., and Labak, P.: Detection of the deep cavern at the Felsopeteny, Hungary site using seismic ambient noise data, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13214, https://doi.org/10.5194/egusphere-egu2020-13214, 2020.
EGU2020-839 | Displays | ITS1.7/SM3.5
Detecting explosion-induced dynamic phenomena using time-lapse seismic surveying – Shaji Mathew, Colin MacBeth, Maria-Daphne Mangriotis, and Jenny Stevanovic
Characterizing seismic events as underground nuclear explosions is a challenging task. Geophysical methods such as seismic monitoring systems are used by the CTBTO to link post-explosion phenomena to potential sources. The main challenges in seismic monitoring involve accurately locating sources and separating underground variations in seismic properties due to the explosion from naturally occurring variations. Underground detonations result in an immense change in pressure and temperature concentrated around the source origin, leading to the formation of characteristic static and dynamic phenomena. This study highlights the potential of using time-lapse seismic to identify ground zero by monitoring post-explosion dynamic phenomena. Time-lapse seismic, also known as 4D seismic, is successfully employed in the oil and gas industry for petroleum production monitoring and management. It involves acquiring more than one 2D/3D seismic survey at different calendar times over the same reservoir and studying the differences in seismic attributes.
Following an underground explosion, dynamic changes in rock and fluid properties are observable for a prolonged period, even up to several decades. This is most prominent near the source origin and results from the redistribution of residual energy, such as pressure, temperature, and saturation. Frequent seismic monitoring surveys (time-lapse seismic) enable one to monitor the changes in rock and fluid properties. This study presents the characteristics of the time-lapse seismic signature observed in a heterogeneous medium (or heterogeneous cavity). We will look into the impact of factors affecting land 4D repeatability on the 4D signature. We will also discuss the significance of identifying the explosion-related 4D signature in a seismic section, and the feasibility of detecting it during an OSI under resource and time constraints. We present a fast machine-learning method for the detection of the explosion-related time-lapse signature, which could be an identifier of the source location or ground zero.
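Land 4D repeatability is commonly summarized with the NRMS metric, a standard industry measure rather than anything specific to this study: the RMS of the difference between baseline and monitor traces, normalized by the mean of their individual RMS values. A minimal sketch:

```python
import numpy as np

def nrms(base, monitor):
    """Normalized RMS difference between two co-located traces, in
    percent: 0 for identical traces, ~141 for uncorrelated ones."""
    rms = lambda x: float(np.sqrt(np.mean(np.square(x))))
    return 200.0 * rms(np.asarray(monitor) - np.asarray(base)) / (
        rms(base) + rms(monitor))

base = np.sin(np.linspace(0.0, 20.0, 500))
print(nrms(base, base))        # identical baseline and monitor surveys
print(nrms(base, 1.1 * base))  # 10% amplitude change between surveys
```

A genuine explosion-related 4D signature must stand out above this background repeatability level to be detectable.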
Acknowledgments: The authors would like to thank EPSRC and AWE for funding this project.
How to cite: Mathew, S., MacBeth, C., Mangriotis, M.-D., and Stevanovic, J.: Detecting explosion-induced dynamic phenomena using time-lapse seismic surveying, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-839, https://doi.org/10.5194/egusphere-egu2020-839, 2020.
EGU2020-21345 | Displays | ITS1.7/SM3.5
Güralp recommendations for the installation of high-performance, observatory-grade seismic stations – Sofia Filippi, Sally Mohr, Marie Balon, Phil Hill, Neil Watkiss, and Shawn Goessen
In order to monitor nuclear tests on a global scale, one of the fundamental tasks of the International Monitoring System (IMS) is to maintain a network of seismic stations with high data reliability. Installation is often the most critical aspect of a successful seismic station: a poorly designed layout introduces noise that can severely degrade data quality, rendering expensive, high-performance equipment inadequate.
To facilitate quality installations and encourage better practices in the global seismic monitoring community, Güralp has developed a system that will allow for observatory-grade data.
The system is a classic seismic station composed of an ultra-low-noise broadband seismometer (a Güralp 3T, 120 s or 360 s), a high-performance digitizer-datalogger (the Güralp Affinity), a sensor cable and an atmospheric pressure enclosure.
The custom-built pressure enclosure enhances performance in vault installations, protecting the sensor from minuscule fluctuations in temperature and pressure, hence considerably reducing noise levels.
Güralp also provide best practice guidelines to assist researchers in designing their station, from site selection, installation and training to data retrieval and analysis.
Since 2001, Güralp have been managing the Eskdalemuir seismic array (EKA), the United Kingdom’s auxiliary station for the International Monitoring System. These years of experience with CTBT-related monitoring have taught us that good results do not come from the instrument alone. This is why Güralp endeavours to accompany operators through every step of the process with a team of specialist engineers, applying 35 years of expertise from project conception to data retrieval.
How to cite: Filippi, S., Mohr, S., Balon, M., Hill, P., Watkiss, N., and Goessen, S.: Güralp recommendations for the installation of high-performance, observatory-grade seismic stations, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21345, https://doi.org/10.5194/egusphere-egu2020-21345, 2020.
EGU2020-7776 | Displays | ITS1.7/SM3.5
The Italian CTBTO Cooperating National Facility (CNF): status of the art – Damiano Pesaresi, Michele Bertoni, Elvio Del Negro, Stefano Parolai, and Paolo Comelli
The Italian National Institute for Oceanography and Experimental Geophysics (Istituto Nazionale di Oceanografia e di Geofisica Sperimentale - OGS) in Trieste (Italy) is offering, in agreement with the Italian CTBTO National Authority, its Cludinico (CLUD) seismic station as a Cooperating National Facility (CNF) to the CTBTO. As outlined in Pesaresi and Horn (2015), the additional data from the Italian CNF improve the CTBTO location capabilities in the Europe/Middle East area by about 21%, which may be of interest given the current situation in Iran, which has breached the nuclear Joint Comprehensive Plan of Action (JCPOA) (Reuters, 2019).
In this presentation, we will illustrate the technical details of the solutions adopted to incorporate the Italian CNF into the CTBTO International Monitoring System (IMS): evaluation of CTBTO certification readiness, CTBTO Standard Station Interface (SSI) hardware and software procurement, test and installation, UPS upgrade, implementation and test of CTBTO communication, security measures.
Reference:
Pesaresi, D., and Horn, N.: Improving CTBTO monitoring capabilities: the Italian proposal for a CNF, CTBT Science and Technology 2015, Vienna, Austria, 22-26 June 2015, T4.1-P31, doi:10.13140/RG.2.1.2862.1927, 2015.
Reuters: "Iran further breaches nuclear deal, says it can exceed 20% enrichment", https://www.reuters.com/article/us-iran-nuclear-idUSKCN1VS05B, last access: 14 January 2020, 2019.
How to cite: Pesaresi, D., Bertoni, M., Del Negro, E., Parolai, S., and Comelli, P.: The Italian CTBTO Cooperating National Facility (CNF): status of the art, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7776, https://doi.org/10.5194/egusphere-egu2020-7776, 2020.
EGU2020-20511 | Displays | ITS1.7/SM3.5
The application of airborne remote sensing during an On-Site Inspection – Aled Rowlands, Peter Labak, Massimo Chiappini, Luis Gaya-Pique, John Buckle, and Henry Seywerd
The application of the airborne remote sensing techniques permitted by the Comprehensive Nuclear-Test-Ban Treaty (magnetic and gamma surveys as well as optical imaging, including infrared measurements) is considered through the prism of inspection team functionality – a logic which applies equally to airborne and ground-based techniques. Work undertaken over recent years through modelling and practical testing has aimed to better understand the ability of airborne remote sensing techniques to detect relevant observables under different conditions. This has led to the compilation of a concept-of-operations document that provides guidance on the application of inspection activities during an On-Site Inspection. As well as highlighting the relative merits of each technique, the document addresses the relative likelihood that a particular airborne technique will return relevant information, helping to avoid committing resources to missions with little likelihood of success.
The paper also addresses the approaches which have been taken to streamline the acquisition of airborne remotely sensed data through bespoke installations, the identification of optimal data processing routines to facilitate the production of reports and the fusion of airborne data products with other data gathered during an inspection.
How to cite: Rowlands, A., Labak, P., Chiappini, M., Gaya-Pique, L., Buckle, J., and Seywerd, H.: The application of airborne remote sensing during an On-Site Inspection, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20511, https://doi.org/10.5194/egusphere-egu2020-20511, 2020.
EGU2020-21039 | Displays | ITS1.7/SM3.5
Radon and CO2 tracer for radioxenon subsurface sampling in the On Site Inspection – Chiara Telloli, Barbara Ferrucci, Antonietta Rizzo, Stefano Salvi, Alberto Ubaldini, and Carmela Vaccaro
The detection of anomalous concentrations of xenon radioisotopes in subsurface gases during an On-Site Inspection (OSI) is a strong indicator of a suspicious underground nuclear explosion. This implies that the sampling methodology must ensure the collection of a reliable, representative subsurface gas sample, avoiding mixing with atmospheric gases. Radioxenon sampling in shallow layers can provide reliable results in desert areas, but different local geological features could result in more complex migration of subsurface gases to the near-surface layers, affecting the representativeness of the sample.
Radon is currently used as a tracer to verify the effective sampling of gases from depth, so its measurement is coupled with the collection of radioxenon subsurface gases. Anomalous radon concentrations in subsurface gases could have different causes: a high radon content in the subsurface indicates a high radon concentration underground, caused by accumulation in a confined underground cavity; on the other hand, a low radon reading in the subsurface indicates a low radon concentration underground, which can reflect either the absence of an underground cavity or the presence of rocks in the cavity that absorb radon. This leads to the consideration that radon is not an unambiguous tracer for Xe subsurface sampling in an OSI. A portable isotopic analyzer (measuring δ13C and CO2) could be used to localize the faults and fractures through which subsurface gases seep. Therefore, this technique could be proposed as auxiliary equipment for a preliminary activity during an OSI and as a monitoring tool during subsurface gas sampling.
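One standard way to interpret joint CO2/δ13C measurements (a generic geochemical technique, not necessarily the authors' workflow) is a Keeling plot: regressing δ13C against 1/CO2 yields an intercept that estimates the isotopic signature of the gas added to background air, helping distinguish a deep-seated CO2 source from a biogenic one. The background and source values below are illustrative assumptions:

```python
import numpy as np

def keeling_intercept(co2_ppm, d13c_permil):
    """Keeling-plot intercept: fit d13C linearly against 1/CO2.
    The intercept estimates the d13C of the CO2 added to background air."""
    slope, intercept = np.polyfit(1.0 / np.asarray(co2_ppm, dtype=float),
                                  np.asarray(d13c_permil, dtype=float), 1)
    return float(intercept)

# Synthetic two-component mixing: background air (410 ppm at -8.5 permil)
# plus an assumed deep CO2 source at -4.0 permil (illustrative values).
bg_c, bg_d, src_d = 410.0, -8.5, -4.0
co2 = np.array([450.0, 550.0, 800.0, 1500.0])
d13c = (bg_c * bg_d + (co2 - bg_c) * src_d) / co2
print(round(keeling_intercept(co2, d13c), 2))  # recovers the source signature
```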
How to cite: Telloli, C., Ferrucci, B., Rizzo, A., Salvi, S., Ubaldini, A., and Vaccaro, C.: Radon and CO2 tracer for radioxenon subsurface sampling in the On Site Inspection, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21039, https://doi.org/10.5194/egusphere-egu2020-21039, 2020.
EGU2020-21517 | Displays | ITS1.7/SM3.5
GIMO – a new geospatial tool for On-site Inspection data collection and techniques integration – Gustavo Haquin Gerade, Peter Labak, Aled Rowlands, Nenad Steric, Oleksandr Shabelnyk, Magnus Ahlander, and Alicia Lobo
An on-site inspection (OSI) is conducted to clarify whether a nuclear weapon test explosion or any other nuclear explosion has been carried out in violation of the Treaty. The conduct of inspection activities requires an approach that takes into account the operational, technical and time constraints specified in the Treaty. A systematic approach, the information-led search logic, was developed so that the inspection team (IT) can function effectively. The core of the search logic is the inspection data acquired. The realization of the search logic is the Inspection Team Functionality (ITF), whose essential element is having the most up-to-date inspection data readily available to inspectors to facilitate planning, processing and reporting.
To facilitate the work of an IT, the Provisional Technical Secretariat launched a project to develop a map-centric tool to support the IT. The Geospatial Information Management system for On-site inspections (GIMO) supports decision-making and facilitates the progress of an inspection without hindering it in any way. At its core is the facilitation of the ITF concept and of the chain of custody of samples and electronic media. It is a single tool for planning inspection activities, managing data collection in the field, integrating the data generated by the implementation of OSI techniques, and reporting. Information security, chain-of-custody and confidentiality requirements are applied in GIMO following the need-to-know principle. GIMO, a 3D geospatially centric software system, has no software dependencies outside the internal local area network, as required by the Treaty. The modular nature of GIMO means that additional functionality can be embedded as and when needed.
How to cite: Haquin Gerade, G., Labak, P., Rowlands, A., Steric, N., Shabelnyk, O., Ahlander, M., and Lobo, A.: GIMO – a new geospatial tool for On-site Inspection data collection and techniques integration, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21517, https://doi.org/10.5194/egusphere-egu2020-21517, 2020.
EGU2020-21601 | Displays | ITS1.7/SM3.5
Key Aspects for the Definition of On-site Inspection Challenging Environments – Feihong Kuang and Gustavo Haquin Gerade
On-site inspection (OSI) is an element of the verification regime of the Comprehensive Nuclear-Test-Ban Treaty (CTBT), with the sole purpose of clarifying whether a nuclear weapon test explosion or any other nuclear explosion has been carried out in violation of the Treaty. An OSI could be launched in any environment where a triggering event occurred. A challenging environment may affect not only the signatures and observables of a nuclear explosion, but also the possibility of conducting the OSI. Harsh environmental conditions, such as extreme climate, dense vegetation cover and complicated topography, among others, could slow down the deployment of field missions and affect the state of health of OSI equipment and even the performance of inspectors, thereby compromising the whole inspection. The operationalization of OSI in different environments is therefore an important aspect of the development of OSI capability, and a well-defined description of the OSI environment is an important step towards comprehensive OSI capabilities. Based on the analysis of historical underground nuclear explosion data and on knowledge of the environmental impact on observables, equipment and inspectors, a definition of the OSI environment was developed. Climatic conditions were grouped into the five main groups of the Köppen-Geiger classification scheme. Vegetation coverage was regrouped into four of the 16 land-cover classes (not including water bodies) of the International Geosphere-Biosphere Programme. Topographic relief was classified into four landform types relevant for OSI using a digital elevation model, based on slope gradient, surface texture and local convexity within neighboring cells. In this presentation, we show how these key environmental aspects impact the conduct of an OSI.
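As a toy sketch of the first criterion of such a DEM-based landform classification (the slope threshold and grid spacing below are illustrative assumptions; the actual scheme also uses surface texture and local convexity):

```python
import numpy as np

def classify_terrain(dem, cell_m=30.0, slope_thresh_deg=8.0):
    """Split a DEM (elevations in metres) into 'flat' and 'steep' cells
    by slope gradient, the first of the three criteria mentioned above."""
    gy, gx = np.gradient(dem, cell_m)                    # elevation gradients
    slope_deg = np.degrees(np.arctan(np.hypot(gx, gy)))  # slope angle per cell
    return np.where(slope_deg < slope_thresh_deg, "flat", "steep")

# Synthetic DEM: a flat plain rising into a parabolic slope.
x = np.linspace(0.0, 1.0, 50)
dem = np.outer(np.ones(50), 300.0 * x**2)
labels = classify_terrain(dem)
print(labels[0, 0], labels[0, -1])  # plain vs. slope
```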
How to cite: Kuang, F. and Haquin Gerade, G.: Key Aspects for the Definition of On-site Inspection Challenging Environments, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21601, https://doi.org/10.5194/egusphere-egu2020-21601, 2020.
EGU2020-16106 | Displays | ITS1.7/SM3.5
The National Data Centre Preparedness Exercise NPE 2019 - Scenario design and expert technical analyses
J. Ole Ross, Nicolai Gestermann, Peter Gaebler, and Lars Ceranna
For the detection of non-compliance with the Comprehensive Nuclear-Test-Ban Treaty (CTBT), the global International Monitoring System (IMS) is being built up and is now nearly complete. The IMS is designed to detect and identify nuclear explosions through their seismic, hydroacoustic, infrasound, and radionuclide signatures. The IMS data are collected, processed into analysis products, and distributed to the signatory states by the International Data Centre (IDC) in Vienna. The member states themselves may operate National Data Centres (NDC) giving technical advice on CTBT verification to their governments. NDC Preparedness Exercises (NPE) are performed regularly to practise the verification procedures for the detection of nuclear explosions in the framework of CTBT monitoring. The NPE 2019 scenario was developed in close cooperation between the Italian NDC-RN (ENEA) and the German NDC (BGR). The fictitious state RAETIA announced a reactor incident with a release of unspecified radionuclides into the atmosphere. Simulated concentrations of particulate and noble-gas isotopes at IMS stations were given to the participants. The task was to check the consistency with the announcement and to search for waveform events in the potential source region of the radioisotopes. In a next step, the fictitious neighbour state EASTRIA provided further national (synthetic) measurements and requested assistance from the IDC with a so-called Expert Technical Analysis (ETA) on the origin of those traces. The presentation shows aspects of scenario design, event selection, and forward atmospheric transport modelling, as well as radionuclide and seismological analyses.
How to cite: Ross, J. O., Gestermann, N., Gaebler, P., and Ceranna, L.: The National Data Centre Preparedness Exercise NPE 2019 - Scenario design and expert technical analyses, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-16106, https://doi.org/10.5194/egusphere-egu2020-16106, 2020.
EGU2020-7891 | Displays | ITS1.7/SM3.5
Comparison study for different approaches in applying High-Resolution Atmospheric Transport Modelling based on validation with Xe-133 observations in Europe
Anne Philipp, Michael Schoeppner, Jolanta Kusmierczyk-Michulec, Pierre Bourgouin, and Martin Kalinowski
The International Data Centre (IDC) of the Comprehensive Nuclear-Test-Ban Treaty Organisation (CTBTO) is investigating how best to incorporate High-Resolution Atmospheric Transport Modelling (HRATM) into its operational, automated pipeline. To support this decision process, the IDC carried out a comparison study of different approaches to applying HRATM. An initial validation study with Flexpart-WRF, a Lagrangian particle dispersion model (LPDM), showed scenario-dependent performance and delivered results comparable to the conventional Flexpart model. In this approach, the Weather Research and Forecasting model (WRF) generates high-resolution meteorological input data for Flexpart-WRF; WRF itself was driven by National Centers for Environmental Prediction (NCEP) data with a horizontal resolution of 0.5 degrees and a time resolution of 1 h. Building on this initial study, an extended study compared these results to Flexpart-WRF using input data from the European Centre for Medium-Range Weather Forecasts (ECMWF) for WRF, and to results from the conventional Flexpart model using high-resolution ECMWF input data. Furthermore, a sensitivity study was performed to optimize the physical and computational parameters of WRF and to test possible meteorological improvements prior to the comparison study.
The performance of the different approaches is evaluated against observational data, using statistical metrics established during the first ATM Challenge in 2016. Observational data from seven episodes of elevated Xe-133 concentrations were selected from the IMS (International Monitoring System) noble-gas system DEX33, located in Germany. Each episode consists of 6 to 11 subsequent samples, each taken over 24 hours. Both Flexpart models used the source terms from a medical isotope production facility in Belgium to simulate the resulting concentration time series at the DEX33 station for different output resolutions. Backward simulations were conducted for each sample, and in the case of Flexpart-WRF, nested input of increased resolution around the source and receptor was used.
The simulated concentrations, as well as the measurements, are also compared to the simulated results produced by the conventional Flexpart model to guide the decision-making process.
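The abstract does not list the ATM Challenge metrics themselves. As an illustration only (the function name and the selection of metrics are our assumptions, not the study's scoring code), a few measures commonly used to compare simulated against observed concentration time series can be computed like this:

```python
import numpy as np

def validation_metrics(observed, simulated):
    """Illustrative ATM validation statistics for a pair of
    activity-concentration time series (same sampling)."""
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    # Pearson correlation between the two series
    corr = np.corrcoef(obs, sim)[0, 1]
    # Fractional bias: 0 is perfect, bounded in [-2, 2]
    fb = 2.0 * (sim.mean() - obs.mean()) / (sim.mean() + obs.mean())
    # FAC2: fraction of samples within a factor of two of the observation
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = sim / obs
    fac2 = float(np.mean((ratio >= 0.5) & (ratio <= 2.0)))
    return {"corr": corr, "fb": fb, "fac2": fac2}
```

A model that reproduces the observations exactly scores corr = 1, fb = 0, and FAC2 = 1.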
How to cite: Philipp, A., Schoeppner, M., Kusmierczyk-Michulec, J., Bourgouin, P., and Kalinowski, M.: Comparison study for different approaches in applying High-Resolution Atmospheric Transport Modelling based on validation with Xe-133 observations in Europe, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7891, https://doi.org/10.5194/egusphere-egu2020-7891, 2020.
EGU2020-2442 | Displays | ITS1.7/SM3.5
Global radioxenon emission inventory for 2014
Martin Kalinowski
Global radioactivity monitoring for the verification of the Comprehensive Nuclear-Test-Ban Treaty (CTBT) includes the four xenon isotopes 131mXe, 133Xe, 133mXe and 135Xe, which serve as important indicators of nuclear explosions. The state-of-the-art radioxenon emission inventory uses generic release estimates for each known nuclear facility. However, the release amount can vary by several orders of magnitude from year to year. The year 2014 was therefore selected for a single-year radioxenon emission inventory that avoids this uncertainty: wherever 2014 emissions reported by the facility operator are available, they are incorporated into the inventory. This presentation summarizes the new emission inventory, and the overall emissions by facility type are compared with previous studies. The global radioxenon emission inventory for 2014 can be used to estimate the contribution of this anthropogenic source to the ambient concentrations observed at IMS noble-gas sensors, in support of CTBT monitoring activities. These include calibration and performance assessment of the verification system as described in the Treaty, as well as developing and validating methods for enhanced detection of signals that may indicate a nuclear test. One specific application will be the third ATM Challenge, announced in December 2019.
How to cite: Kalinowski, M.: Global radioxenon emission inventory for 2014, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2442, https://doi.org/10.5194/egusphere-egu2020-2442, 2020.
EGU2020-7504 | Displays | ITS1.7/SM3.5
A Seismic Event Relative Location Benchmark Case Study
Tormod Kvaerna, Steven J. Gibbons, Timo Tiira, and Elena Kozlovskaya
"Precision seismology" encompasses a set of methods that use differential measurements of time delays to estimate the relative locations of earthquakes and explosions. Delay times estimated from signal correlations often allow far more accurate estimates of one event location relative to another than is possible using classical hypocenter determination techniques. Many different algorithms and software implementations have been developed, and different assumptions and procedures can result in significant variability between relative event location estimates. We present a Ground Truth (GT) database of 55 military surface explosions in northern Finland in 2007 that all took place within 300 meters of each other. The explosions were recorded with a high signal-to-noise ratio to distances of about 2 degrees, and the exceptional waveform similarity between the signals from the different explosions allows accurate correlation-based time-delay measurements. With exact coordinates for the explosions, we can assess the fidelity of relative location estimates made using any location algorithm or implementation. Applying double-difference calculations using two different 1-D velocity models for the region results in hypocenter-to-hypocenter distances that are too short, indicating that the wavefield leaving the source region is more complicated than the models predict. Using the GT event coordinates, we can measure the slowness vectors associated with each outgoing ray from the source region. We demonstrate that, had such corrections been available, a significant improvement in the relative location estimates would have resulted. In practice we would of course need to solve for event hypocenters and slowness corrections simultaneously, and significant work will be needed to upgrade relative location algorithms to accommodate uncertainty in the form of the outgoing wavefield.
We present this dataset, together with GT coordinates, raw waveforms for all events on six regional stations, and tables of time-delay measurements, as a reference benchmark by which relative location algorithms and software can be evaluated.
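The correlation-based time-delay measurement underpinning such relative locations can be sketched as follows: the delay of one waveform relative to another is read off the peak of their cross-correlation. This integer-sample version omits the sub-sample interpolation a production implementation would use; the function name is illustrative, not from the benchmark software.

```python
import numpy as np

def cc_delay(trace_a, trace_b, dt):
    """Estimate the time delay (seconds) of trace_b relative to trace_a
    from the peak of their full cross-correlation (integer-lag only)."""
    a = np.asarray(trace_a, float) - np.mean(trace_a)
    b = np.asarray(trace_b, float) - np.mean(trace_b)
    cc = np.correlate(b, a, mode="full")
    # index N-1 of the 'full' output corresponds to zero lag
    lag = int(np.argmax(cc)) - (len(a) - 1)
    return lag * dt
```

For example, a pulse arriving three samples later on the second trace yields a delay of 3*dt.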
How to cite: Kvaerna, T., Gibbons, S. J., Tiira, T., and Kozlovskaya, E.: A Seismic Event Relative Location Benchmark Case Study, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7504, https://doi.org/10.5194/egusphere-egu2020-7504, 2020.
EGU2020-9878 | Displays | ITS1.7/SM3.5
High-frequency hybrid modelling of near-source topographic effects at teleseismic distances: The Degelen mountain case study
Marta Pienkowska, Stuart Nippress, Tarje Nissen-Meyer, and David Bowers
We apply a hybrid method that couples global Instaseis databases (van Driel et al., 2015) with the local finite-difference code WPP (Nilsson et al., 2007) to study the 1960s-1980s nuclear explosions located at the USSR Degelen mountain test site. Observed teleseismic P waves (up to 2 Hz) display strong near-source signatures, yet the relative importance of contributing factors – such as explosion depth and yield, scattering from near-source topography and geological heterogeneities, as well as non-linear effects – is not well understood. An analysis of teleseismic waveforms suggests that these features depend on the source location within the Degelen mountain range, while depths and yields do not show a consistent effect. We therefore propose that the change in signal characteristics on teleseismic waveforms is related to the mountainous topography in the source region, and we turn to deterministic hybrid modelling to test the effect of Degelen topography at teleseismic distances. Despite simplistic modelling assumptions, we achieve an excellent fit with the observed waveforms. Amplitudes are in good agreement and many observed features are reproduced by synthetic seismograms at 2 Hz, highlighting the importance of near-source 3-D effects on long-range wave propagation. Hybrid modelling of more realistic high-frequency scenarios could ultimately lead to waveform-based constraints on explosion locations, for example via grid-search methods or more advanced learning algorithms, or even improve nuclear discrimination methods.
How to cite: Pienkowska, M., Nippress, S., Nissen-Meyer, T., and Bowers, D.: High-frequency hybrid modelling of near-source topographic effects at teleseismic distances: The Degelen mountain case study, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9878, https://doi.org/10.5194/egusphere-egu2020-9878, 2020.
EGU2020-3340 | Displays | ITS1.7/SM3.5
Comparing two location estimates of the same seismo-acoustic event. A survey of available statistical tools.
Ronan Le Bras and Ehsan Qorbani
The Comprehensive Nuclear-Test-Ban Treaty (CTBT) calls for a verification regime involving interactions between the International Data Centre (IDC) component of the Provisional Technical Secretariat (PTS), established in Vienna, Austria, and the National Data Centres (NDC) of Member States of the Treaty. Location estimates of the same event by the two organizations are obtained using similar methods and software but potentially involve different seismo-acoustic networks; a direct comparison of the distances and time differences is therefore not sufficient, and the different error estimates for the event should be taken into account. Most location methods use iterative linear inversions, and the probability distributions are Gaussian, with the covariance matrix from the last step of the iterative inversion providing the parameters of the Gaussian distributions. We explored the statistical tools available to compare two multi-dimensional distributions and to measure a distance between them in an objective manner, including the Hellinger, Bhattacharyya, and Mahalanobis distances, and we will show examples of application to the seismo-acoustic location problem.
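For two Gaussian location estimates (mean epicentre plus covariance from the final inversion step), the distances mentioned have closed forms. A minimal sketch for the Bhattacharyya and Mahalanobis cases follows; the function names and the inputs in the usage note are hypothetical, since the abstract does not specify coordinates or units.

```python
import numpy as np

def bhattacharyya_gaussian(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two multivariate normal
    location estimates (mean vector + covariance matrix)."""
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    cov = 0.5 * (np.asarray(cov1, float) + np.asarray(cov2, float))
    diff = mu1 - mu2
    # Mean-separation term, weighted by the averaged covariance
    term_mean = 0.125 * diff @ np.linalg.solve(cov, diff)
    # Covariance-mismatch term (zero when the two covariances agree)
    term_cov = 0.5 * np.log(
        np.linalg.det(cov)
        / np.sqrt(np.linalg.det(np.asarray(cov1, float))
                  * np.linalg.det(np.asarray(cov2, float)))
    )
    return float(term_mean + term_cov)

def mahalanobis(mu1, cov1, mu2):
    """Mahalanobis distance of one epicentre from another's error ellipse."""
    diff = np.asarray(mu1, float) - np.asarray(mu2, float)
    return float(np.sqrt(diff @ np.linalg.solve(np.asarray(cov1, float), diff)))
```

Two identical estimates give a Bhattacharyya distance of zero; with an identity covariance, the Mahalanobis distance reduces to the Euclidean one.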
How to cite: Le Bras, R. and Qorbani, E.: Comparing two location estimates of the same seismo-acoustic event. A survey of available statistical tools., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3340, https://doi.org/10.5194/egusphere-egu2020-3340, 2020.
EGU2020-1633 | Displays | ITS1.7/SM3.5
Identification of a shear mechanism in moderately overburied chemical explosive experiments and relation to the DPRK declared nuclear tests
David Steedman and Christopher Bradley
The Source Physics Experiments (SPE) provided new insights into explosion phenomenology. In particular, the data reveal a mechanism for generating shear energy in the near-source region which may explain why certain North Korean declared nuclear tests do not conform to explosion/earthquake discriminants based on relative body wave (mb) and shear wave (MS) magnitudes.
The SPE chemical explosive detonations in granite included three scaled depth of burial (SDOB) categories: 1) nominally buried defines the burial depths from which mb:MS discriminants were derived; 2) deeply overburied, or Green’s function depth; and 3) moderately overburied, or between the two end cases above. This last category is a general descriptor for the North Korean declared nuclear tests which fail the mb:MS discriminant.
Near-source three-axis borehole accelerometers indicate that the nominal and deeply buried SPE experiments created the expected spherical shock environment dominated by radial ground motion with insignificant tangential response.
The moderately overburied SPE experiments indicate a significant contrast. The tangential records in these experiments are quiescent with initial shock arrival and then exhibit a sudden, significant surge immediately following the peak radial component. At distant ranges where the shock wave amplitude has attenuated the environment becomes more consistent with a spherical shock with no significant tangential components.
We interpret a “shear release” mechanism on an obliquely loaded rock joint:
1. During incipient loading the normal shock component forces closure of the joint.
2. In cases of low explosive loading and/or high in situ stress, the tangential component is insufficient to cause joint sliding and this load is stored as shearing strain.
3. As the ground shock peak passes, the joint unloads and dilates, and the now-open joint allows a sudden release of the stored shear strain, resulting in sudden joint rupture and slippage.
Step 3 above is essential for identifying when this mechanism occurs. For large in situ stress accompanied by low explosive loading (i.e., deep burial, or high SDOB) the joint fails to open and rupture does not occur. For low in situ stress accompanied by high explosive loading (i.e., shallow burial, or nominal SDOB) there is insufficient resistance to tangential slippage and no shear energy is stored for later release.
The above provides a fully geodynamic explanation of why certain explosive events in jointed rock will fall within the correct explosion population of an mb:MS discriminant while others may not. Moreover, we illustrate that these observations from the SPE results map directly onto generally accepted yield and depth combinations for the six declared North Korean nuclear tests.
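The three burial regimes can be caricatured with a simple Coulomb friction criterion. This toy classification is our illustration of the argument, not the authors' model, and all stress values in the usage example are placeholders rather than SPE data.

```python
def shear_release_regime(peak_normal, peak_shear, unload_normal, mu=0.6):
    """Toy Coulomb-friction caricature of the three SDOB regimes.
    All stresses are illustrative placeholders (arbitrary units);
    mu is an assumed joint friction coefficient.
    - shear exceeds friction at peak load  -> joint slips immediately,
      no shear energy stored (nominal SDOB)
    - shear held at peak load but exceeds friction once the joint
      unloads/dilates -> stored shear releases (moderate SDOB)
    - otherwise the joint stays locked (deep SDOB)."""
    if peak_shear > mu * peak_normal:
        return "slips during loading (nominal SDOB)"
    if peak_shear > mu * unload_normal:
        return "stored shear released on unloading (moderate SDOB)"
    return "locked, no release (deep SDOB)"
```

Only the intermediate case stores shear strain during loading and then releases it as the joint dilates, matching the tangential surge described above.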
How to cite: Steedman, D. and Bradley, C.: Identification of a shear mechanism in moderately overburied chemical explosive experiments and relation to the DPRK declared nuclear tests, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1633, https://doi.org/10.5194/egusphere-egu2020-1633, 2020.
EGU2020-2203 | Displays | ITS1.7/SM3.5
Magnitude Scaling of Regional-phase Amplitudes from the DPRK Announced Nuclear Tests
Sheila Peacock, David Bowers, and Neil Selby
At regional distances (<~1700 km) the phases Pn, Pg and Lg are generally the most prominent arrivals from a crustal seismic source. Amplitude ratios of Pn or Pg to Lg have been investigated by several authors (e.g. Hartse et al. 1997 Bull. Seism. Soc. Am.) as earthquake/explosion discriminators. Theory and observation show that explosions generate shear phases less efficiently than earthquakes, hence the amplitude ratio of Pn and Pg to Lg is expected to be higher for explosions, especially at frequencies above ~2 Hz. Walter et al. (2018 Seismol. Res. Lett. DOI 10.1785/0220180128) showed that amplitude ratios Pg/Lg and Pn/Lg at 2-4 Hz were clear discriminants between the six announced nuclear tests of the Democratic People's Republic of Korea (DPRK) and a population of earthquakes. We investigate regional-phase amplitudes for stations MDJ (distance ~376 km) and USRK (~406 km). Walter et al. found a weak dependence of Pg/Lg in the 2-4 Hz band at MDJ on the magnitude Mw of the explosion. We find this dependence at USRK also. We also explore the regional amplitude behaviour at a range of frequencies, and dependence on different magnitude measures, such as network body-wave and surface-wave magnitudes.
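The band-limited amplitude ratio described above can be sketched as follows. This is a minimal illustration, not the authors' processing chain; the filter order, the RMS amplitude measure, and the window length are all assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def phase_amplitude(trace, fs, t_phase, window, band=(2.0, 4.0)):
    """RMS amplitude of a phase in a band-limited window.
    trace: ground-motion samples; fs: sampling rate (Hz);
    t_phase: phase arrival time (s from trace start); window: length (s)."""
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, trace)           # zero-phase band-pass
    i0 = int(t_phase * fs)
    seg = filtered[i0:i0 + int(window * fs)]
    return np.sqrt(np.mean(seg ** 2))

def p_lg_ratio(trace, fs, t_p, t_lg, window=5.0):
    """P/Lg amplitude ratio; higher values favour an explosion source."""
    return (phase_amplitude(trace, fs, t_p, window)
            / phase_amplitude(trace, fs, t_lg, window))
```

On a synthetic trace with a strong P-like burst and a weaker Lg-like burst, the ratio exceeds unity, mimicking the explosion signature described in the abstract.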
© British Crown Owned Copyright 2019/AWE
This work is distributed under the Creative Commons Attribution 4.0 License. This licence does not affect the Crown copyright work, which is re-usable under the Open Government Licence (OGL). The Creative Commons Attribution 4.0 License and the OGL are interoperable and do not conflict with, reduce or limit each other.
How to cite: Peacock, S., Bowers, D., and Selby, N.: Magnitude Scaling of Regional-phase Amplitudes from the DPRK Announced Nuclear Tests, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2203, https://doi.org/10.5194/egusphere-egu2020-2203, 2020.
EGU2020-6035 | Displays | ITS1.7/SM3.5
Understanding explosion-related aftershocks using field experiments and physics-based simulation
Kayla Kroll, Gene Ichinose, Sean Ford, Arben Pitarka, William Walter, and Douglas Dodge
Previous studies have shown that explosion sources produce fewer aftershocks and that they are generally smaller in magnitude compared to aftershocks of similarly sized earthquake sources (Jarpe et al., 1994, Ford and Walter, 2010). It has also been suggested that the explosion-induced aftershocks have smaller Gutenberg-Richter b-values (Ryall and Savage, 1969, Ford and Labak, 2016) and that their rates decay faster than a typical Omori-like sequence (Gross, 1996). Recent chemical explosion experiments at the Nevada National Security Site (NNSS) were observed to generate vigorous aftershock activity and allow for further comparison between earthquake- and explosion-triggered aftershocks. Of the four recent chemical explosion experiments conducted between July 2018 and June 2019, the two largest explosions (i.e. 10-ton and 50-ton) generated hundreds to thousands of aftershocks. Preliminary analysis indicates that these aftershock sequences have similar statistical characteristics to traditional tectonically driven aftershocks in the region.
The physical mechanisms that contribute to differences in aftershock behavior following earthquake and explosion sources are poorly understood. Possible mechanisms may be related to weak material properties in the shallow subsurface that do not give rise to stress concentrations large enough to support brittle failure. Additionally, minimal changes in the shear component of the stress tensor for explosion sources may also contribute to differences in aftershock distributions. Here, we compare aftershock statistics and productivity of the explosion-related aftershocks at the NNSS site to synthetic catalogs of aftershocks triggered by explosion sources. These synthetic catalogs are built by coupling strains that result from modeling the explosion source process with the SW4 wave propagation code with the 3D physics-based earthquake simulation code, RSQSim. We compare statistical properties of the aftershock sequence (e.g. productivity, maximum aftershock magnitude, Omori decay rate) and the spatiotemporal relationship between stress changes and event locations of the synthetic and observed aftershocks to understand the primary mechanisms that control them.
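One of the statistics compared above, the Gutenberg-Richter b-value, is commonly estimated with the maximum-likelihood formula of Aki (1965). The sketch below is a generic illustration, not the authors' workflow; the completeness magnitude `mc` and bin width `dm` are assumed inputs:

```python
import numpy as np

def aki_b_value(mags, mc, dm=0.1):
    """Maximum-likelihood Gutenberg-Richter b-value (Aki, 1965):
    b = log10(e) / (mean(M) - (mc - dm/2)), using only magnitudes at or
    above the completeness magnitude mc, with the standard half-bin
    correction for magnitudes binned at width dm (dm=0 for continuous data)."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))
```

A lower b-value for explosion aftershocks, as suggested by Ryall and Savage (1969) and Ford and Labak (2016), would show up directly in this estimate.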
Prepared by LLNL under Contract DE-AC52-07NA27344.
How to cite: Kroll, K., Ichinose, G., Ford, S., Pitarka, A., Walter, W., and Dodge, D.: Understanding explosion-related aftershocks using field experiments and physics-based simulation, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6035, https://doi.org/10.5194/egusphere-egu2020-6035, 2020.
EGU2020-9799 | Displays | ITS1.7/SM3.5
Seismo-Acoustic Shockwave Isolation for Low-Yield Local Explosions
Joshua Dickey, Michael Pasyanos, Richard Martin, and Raúl Peña
Seismic and acoustic recordings have long been used for the forensic analysis of various natural and anthropogenic events, especially in the realm of nuclear treaty monitoring. More recently, multi-phenomenological analysis has been applied to these signals with great success, providing unique constraints for studying a broad range of source events, including man-made noise, earthquakes and explosions. In particular, the fusion of seismic and infrasonic data has proven valuable for the analysis of explosive yield, significantly improving on the yield estimates obtained from either seismic or acoustic analysis alone.
Unfortunately, the seismo-acoustic analysis of local explosions is complicated by the fact that the two phenomena are potentially co-dependent. Large seismic waves displace the earth like a piston, potentially inducing acoustic waves into the atmosphere as they pass. Similarly, large acoustic waves can couple into the earth, inducing ground motion along their path. This co-dependence can be problematic, particularly when the passing acoustic shockwave couples into the earth coincident with a seismic phase arrival, thereby corrupting the signal.
To address this problem, we present a method for isolating the shockwave response of a seismic sensor, such that any underlying seismic phase arrivals can be recovered. This is accomplished by employing the adaptive noise cancellation model, where a co-located infrasound sensor is used as a reference measurement for the shockwave. In this model, the adaptive filter learns the transform between the relative atmospheric pressure (as recorded by the infrasound sensor), and the resulting ground motion (as recorded by the seismometer). In this way, the filtered infrasound recording approximates the seismic shockwave response, and can be subtracted from the seismograph to recover the phase arrivals.
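The adaptive noise cancellation model described above can be sketched with a basic least-mean-squares (LMS) filter. This is a generic textbook illustration under an assumed tap count and step size, not the specific filter used in the study:

```python
import numpy as np

def lms_cancel(seis, infra, n_taps=64, mu=0.01):
    """Adaptive noise cancellation: learn the transform from the infrasound
    reference to the shockwave-induced ground motion, then subtract the
    prediction so the error approximates the underlying seismic signal."""
    w = np.zeros(n_taps)                       # adaptive FIR weights
    cleaned = np.zeros(len(seis))
    for n in range(n_taps - 1, len(seis)):
        x = infra[n - n_taps + 1:n + 1][::-1]  # most recent reference samples
        y = w @ x                              # predicted shockwave response
        e = seis[n] - y                        # error = recovered seismic signal
        cleaned[n] = e
        w += mu * e * x                        # LMS weight update
    return cleaned
```

The step size `mu` trades convergence speed against misadjustment; it must be small relative to the reference signal power for the update to remain stable.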
The experimental data come from a set of three low-yield near-surface chemical explosions conducted by LLNL as part of a field experiment known as FE2. The explosions were recorded at eight stations located at varying distances from the source (between 64 m and 2 km), with each station consisting of a co-located three-component seismic velocity transducer and differential infrasound sensor. The adaptive technique is demonstrated for recovering seismic arrivals in both the vertical and horizontal channels across all eight stations, and evaluated using leave-one-out cross-validation across the three explosions.
How to cite: Dickey, J., Pasyanos, M., Martin, R., and Peña, R.: Seismo-Acoustic Shockwave Isolation for Low-Yield Local Explosions, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9799, https://doi.org/10.5194/egusphere-egu2020-9799, 2020.
EGU2020-10441 | Displays | ITS1.7/SM3.5
Uncertainty Propagation and Stochastic Interpretation of Shear Motion Generation due to Underground Chemical Explosions in Jointed Rock
Souheil Ezzedine, Oleg Vorobiev, Tarabay Antoun, and William Walter
We have performed 3D simulations of underground chemical explosions conducted recently in a granitic outcrop as part of the Source Physics Experiment (SPE) campaign. The main goal of these simulations is to understand the nature of the shear motions recorded in the near field, considering uncertainties in (a) the geological characterization of the joints, such as density, orientation and persistence, and (b) the geomechanical material properties, such as friction angle, bulk sonic speed and poroelasticity. The approach is probabilistic: joints are depicted using a Boolean stochastic representation of inclusions, conditional on observations, with their probability density functions inferred from borehole data. Then, using a novel continuum approach, joints and faults are painted into the continuum host material, granite. To ensure the fidelity of the painted joints we have conducted a sensitivity study of continuum vs. discrete representation of joints. Simulating wave propagation in a heterogeneous, discontinuous rock mass is a highly non-linear problem, and uncertainty propagation via intrusive methods is computationally prohibitive. Therefore, using a series of nested Monte Carlo simulations, we have explored and propagated both the geological and the geomechanical uncertainty parameters. We have probabilistically shown that significant shear motions can be generated by sliding on the joints caused by spherical wave propagation. The polarity of the shear motion may change during unloading, when the stress state may favor sliding on a different joint set. Although this study focuses on understanding shear wave generation in the near field, the overall goal of our investigation is to understand the far-field seismic signatures associated with shear waves generated in the immediate vicinity of an underground explosion. Therefore, we have abstracted the near-field behavior into a probabilistic source-zone model which is used in the far-field wave propagation.
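The nested Monte Carlo idea can be illustrated with a toy Coulomb slip criterion: an outer loop samples uncertain geological joint properties, and an inner loop samples geomechanical load realizations. All distributions and parameter ranges below are invented for illustration and are not those of the SPE simulations:

```python
import numpy as np

def slip_probability(sigma_n_mean, tau_mean, n_outer=200, n_inner=200, seed=0):
    """Nested Monte Carlo sketch: outer loop over geological uncertainty
    (friction angle of a joint set), inner loop over load realizations
    (normal and shear stress on the joint). A joint slips when the shear
    stress exceeds the Coulomb frictional strength tan(phi) * sigma_n."""
    rng = np.random.default_rng(seed)
    p = np.empty(n_outer)
    for i in range(n_outer):
        phi = np.radians(rng.uniform(25.0, 40.0))                 # friction angle
        sigma_n = rng.normal(sigma_n_mean, 0.1 * sigma_n_mean, n_inner)
        tau = rng.normal(tau_mean, 0.1 * tau_mean, n_inner)
        p[i] = np.mean(tau > np.tan(phi) * sigma_n)               # inner estimate
    return p.mean()                                               # outer average
```

Strong shear loading relative to normal stress drives the slip probability toward one, while weak loading drives it toward zero, which is the qualitative behavior the probabilistic joint model explores.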
This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344
How to cite: Ezzedine, S., Vorobiev, O., Antoun, T., and Walter, W.: Uncertainty Propagation and Stochastic Interpretation of Shear Motion Generation due to Underground Chemical Explosions in Jointed Rock, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10441, https://doi.org/10.5194/egusphere-egu2020-10441, 2020.
EGU2020-22299 | Displays | ITS1.7/SM3.5
Seismoacoustic Monitoring of Underground Explosions at Redmond Salt Mine, Utah, United States
Nathan Downey, Sarah Albert, and Daniel Bowman
Underground blasting within an extensive tunnel complex occurs as part of regular operations at Redmond Salt Mine, located in central Utah, United States. During the period of October 2017 – July 2019, we monitored these explosions using seismic and infrasound sensors. The experiment recorded approximately 1000 mining-related blasts as well as several hundred small earthquakes that naturally occur in the monitoring region at source to receiver offsets of 3-25 km. The data collected early in the experiment allow us to explore the characteristics of infrasound signals generated in subterranean tunnels, which show a variety of interesting characteristics, including components related to the structure of the underground tunnel complex, and a time-varying propagation efficiency. We present analyses that attempt to explain these properties. In addition, the data collected during the experiment allow us to test location algorithms at local distances by comparing computed locations with those taken from ground-truth logs. Finally, comparison of the tectonic and explosion signals allows us to examine possible discrimination methods that will effectively differentiate explosions from earthquakes at local distances.
How to cite: Downey, N., Albert, S., and Bowman, D.: Seismoacoustic Monitoring of Underground Explosions at Redmond Salt Mine, Utah, United States, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22299, https://doi.org/10.5194/egusphere-egu2020-22299, 2020.
EGU2020-8949 | Displays | ITS1.7/SM3.5
Tuning IMS station processing parameters and detection thresholds to increase detection precision and decrease detection miss rate
Christos Saragiotis and Ivan Kitov
Two principal performance measures of the International Monitoring System (IMS) stations' detection capability are the rate of automatic detections associated with events in the Reviewed Event Bulletin (REB) and the rate of detections manually added to the REB. These two metrics roughly correspond, respectively, to the precision (the complement of the false-discovery rate) and the miss rate (false-negative rate) of a binary classification test. The false-discovery and miss rates are significantly influenced by the number of phases detected by the detection algorithm, which in turn depends on prespecified slowness-, frequency- and azimuth-dependent threshold values used in the short-term-average over long-term-average ratio detection scheme of the IMS stations. In particular, the lower the threshold, the more the detections and therefore the lower the miss rate but the higher the false-discovery rate; the higher the threshold, the fewer the detections and therefore the higher the miss rate but the lower the false-discovery rate. In that sense, decreasing both the false-discovery rate and the miss rate are conflicting goals that need to be balanced. On one hand, it is essential that the miss rate be as low as possible, since no nuclear explosion should go unnoticed by the IMS. On the other hand, a high false-discovery rate compromises the quality of the automatically generated event lists and adds heavy and unnecessary workload for the seismic analysts during the interactive processing stage.
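The threshold trade-off described above can be illustrated with a classic STA/LTA characteristic function. This generic sketch (window lengths assumed) is not the IMS implementation, which applies slowness-, frequency- and azimuth-dependent thresholds after array processing:

```python
import numpy as np

def sta_lta(trace, fs, sta_win=0.5, lta_win=30.0):
    """Classic STA/LTA characteristic function on squared amplitudes,
    computed with cumulative sums for efficiency."""
    x2 = np.asarray(trace, dtype=float) ** 2
    ns, nl = int(sta_win * fs), int(lta_win * fs)
    csum = np.concatenate(([0.0], np.cumsum(x2)))
    sta = (csum[ns:] - csum[:-ns]) / ns   # short-term mean power
    lta = (csum[nl:] - csum[:-nl]) / nl   # long-term mean power
    ratio = np.zeros(len(x2))
    ratio[nl - 1:] = sta[nl - ns:] / lta  # defined once both windows are full
    return ratio

def detect(ratio, threshold):
    """Sample indices where the ratio exceeds the threshold. A lower
    threshold yields more detections (lower miss rate, higher
    false-discovery rate); a higher threshold the reverse."""
    return np.flatnonzero(ratio > threshold)
```

Running both a low and a high threshold over the same trace makes the precision/miss-rate tension concrete: the low-threshold detection list always contains the high-threshold one.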
A previous study concluded that one way to decrease both the miss and false-discovery rates, as well as the analyst workload, is to increase the retiming interval, i.e., the maximum time by which an analyst may move an arrival pick without having to declare a new arrival. Indeed, when a detection needs to be moved by more than the retiming interval, not only is this a much more time-consuming task for the analyst than simply retiming it, but it also affects negatively both the associated rate (the automatic detection is deleted and therefore not associated with an event) and the added rate (a new arrival has to be added to the arrival list). The International Data Centre increased the retiming interval from 4 s to 10 s in October 2018. We show how this change affected the associated-detections and added-detections rates, and how the values of these metrics can be further improved by tuning the detection threshold levels.
How to cite: Saragiotis, C. and Kitov, I.: Tuning IMS station processing parameters and detection thresholds to increase detection precision and decrease detection miss rate, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8949, https://doi.org/10.5194/egusphere-egu2020-8949, 2020.
EGU2020-9396 | Displays | ITS1.7/SM3.5
Assessment of the empirical matched field processing algorithm for autonomous tracking of aftershock sequences
Andreas Köhler, Tormod Kværna, and Steven J. Gibbons
Autonomous algorithms can improve the processing of aftershock sequences, for example by reducing the analyst workload. We present a system for automatic detection and location of aftershocks in a specific region following a large earthquake. The system seeks to identify all signals generated by seismic events in the target region, while passing over signals generated by sources in all other regions. For a given station, we can generate a sensitive empirical matched field (EMF) detector for the target region using only an empirical template from the mainshock signal. These EMF detectors perform much better on seismic arrays than on 3-component stations. For each selected station in the network, a multivariate detector combines the EMF detector with an optimized continuous AR-AIC detector to generate a target-optimized detection list. For arrays, an additional continuous calibrated f-k process reliably screens out likely signals from other sources. A region-specific phase association algorithm takes the screened detection lists from each station and generates a preliminary aftershock bulletin. We have processed aftershock sequences from four major earthquakes: the Tohoku event in 2011 (Japan), the Illapel event in 2015 (Chile), the Papua New Guinea event in 2018 and the Gorkha event in 2015 (Nepal).
We evaluate the results in detail by comparing the automatically generated origins and corresponding phase arrival times with matching events and associated arrivals in the analyst-reviewed (REB) and automatic (SEL3) bulletins issued by the CTBTO Preparatory Commission. Between 40% and 65% of all events in the REB are found to closely match the locations and origin times of the events found by our EMF-based procedure. The resulting discrepancies are assessed with respect to signal-to-noise ratio, number of defining stations, and epicentral distance. Furthermore, the REB events not detected by the EMF method are analyzed, and a few phase misidentifications (e.g., P vs. pP) are assessed to better understand the limitations of the autonomous procedure. In general, we find that our EMF solutions are closer to the matching REB events than the corresponding SEL3 events. The analyst is helped both by the improved location estimates and a lower number of qualitatively incorrect event hypotheses. A key factor in the performance is the number of contributing seismic arrays. Aftershock sequences in the southern hemisphere performed the worst given the poorer array coverage.
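Empirical matched field processing exploits inter-sensor phase and amplitude structure across an array, but the underlying master-event idea can be illustrated in one dimension with a normalized waveform-correlation detector. This is a deliberate simplification, not the EMF algorithm itself:

```python
import numpy as np

def correlation_detector(trace, template):
    """Slide a master-event template along continuous data and return the
    normalized (Pearson) cross-correlation at each lag; values near 1
    flag signals resembling the template."""
    nt = len(template)
    t = (template - template.mean()) / (template.std() * nt)
    out = np.empty(len(trace) - nt + 1)
    for i in range(len(out)):
        seg = trace[i:i + nt]
        s = seg.std()
        out[i] = 0.0 if s == 0 else np.dot(t, (seg - seg.mean()) / s)
    return out
```

A detection list built from peaks of this function is naturally region-specific, which is the same design goal the EMF detector achieves (with much better performance on arrays than on single 3-component stations).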
How to cite: Köhler, A., Kværna, T., and Gibbons, S. J.: Assessment of the empirical matched field processing algorithm for autonomous tracking of aftershock sequences, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9396, https://doi.org/10.5194/egusphere-egu2020-9396, 2020.
EGU2020-12105 | Displays | ITS1.7/SM3.5
The Seismic Network of Zapopan: Evaluating the local seismicity of the western Guadalajara Metropolitan ZoneDiana Núñez, Francisco J. Núñez-Cornú, Edgar Alarcón, Claudia B. M. Quinteros-Cartaya, Carlos Suárez-Plascencia, and Sergio Ramírez
The Municipality of Zapopan, Jalisco, is located west of the Guadalajara Metropolitan Zone at the intersection of three rift zones: Tepic-Zacoalco, Chapala-Tula, and Colima. The importance of this region lies in the rapid population growth it has experienced over just a few years. This growth has been driven by commercial and service activities and, above all, by industry and technology, and the city is now ranked the second most populous in Mexico, after the federal capital.
The western region of the Guadalajara Metropolitan Zone (GMZ) contains numerous fault systems where, historically, significant earthquakes and seismic swarms with similar characteristics have occurred, such as those of 1685-1687, 1875, 1932, 1995, and 2002. This region also hosts La Caldera de la Primavera, a rhyolitic volcanic caldera that continues to exhibit seismic and geothermal activity.
Recently, in 2015 and 2016, new seismic swarms occurred and were recorded instrumentally for the first time by the Jalisco Seismic and Accelerometric Network (RESAJ). The two seismic sequences took place along two alignments oriented in the same direction as the Colima rift. These epicenters suggest the existence of two nearly parallel normal faults that would form the Zapopan Graben. Given the lengths of these faults, 16 km for the eastern fault and 28 km for the western fault, earthquakes of magnitude 6.2-6.5 could be generated.
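To illustrate how a magnitude estimate follows from fault length, one widely used option is the Wells and Coppersmith (1994) surface-rupture-length regression, sketched below. The abstract does not state which scaling relation underlies its 6.2-6.5 range, so both the relation and the resulting values are assumptions; different regressions (e.g., normal-faulting-specific coefficients) shift the result by a few tenths of a magnitude unit.

```python
import math

def mw_from_rupture_length(length_km):
    """Moment magnitude from surface rupture length via the
    Wells & Coppersmith (1994) all-slip-type regression:
        Mw = 5.08 + 1.16 * log10(L)
    (an assumed relation; the abstract does not name its scaling law)."""
    return 5.08 + 1.16 * math.log10(length_km)

# Inferred Zapopan faults: ~16 km (eastern) and ~28 km (western)
mw_east = mw_from_rupture_length(16.0)  # roughly 6.5
mw_west = mw_from_rupture_length(28.0)  # roughly 6.8
```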
In the framework of the CeMIEGeo P-24 project (SENER-CONACyT), we have continued studying the seismicity of this region with the deployment of 25 seismic stations in the vicinity of La Caldera de la Primavera. This deployment revealed the high level of seismicity occurring in the area of Zapopan, the Tesistán Valley, and La Caldera de la Primavera.
Based on these new studies and on the seismic history of the region, a collaboration agreement has been established between the Research Group UDG-CA-276 SisVOc and the Civil Protection of the Municipality of Zapopan to install a local seismic network that will make it possible to characterize the fault systems of the region tectonically and structurally, and to mitigate the possible effects of local seismicity on the population. Since May 2019, three Obsidian 8X seismic stations with Lennartz LE3D 1 Hz and Episensor sensors, together with two accelerometers installed in the city, have been operating, constituting the Zapopan Seismic and Accelerometric Network (RESAZ). RESAZ operates together with the nearest stations of the RESAJ. In this work, we present the first results of the analysis of the seismicity recorded in Zapopan.
How to cite: Núñez, D., Núñez-Cornú, F. J., Alarcón, E., Quinteros-Cartaya, C. B. M., Suárez-Plascencia, C., and Ramírez, S.: The Seismic Network of Zapopan: Evaluating the local seismicity of the western Guadalajara Metropolitan Zone, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12105, https://doi.org/10.5194/egusphere-egu2020-12105, 2020.
EGU2020-4378 | Displays | ITS1.7/SM3.5
Recognition of earthquakes and explosions based on generalized S transformQianli Yang and Tingting Wang
ITS1.8/SSS1.1 – Bridging between scientific disciplines: Participatory Citizen Science and Open Science as a way to go
EGU2020-7453 * | Displays | ITS1.8/SSS1.1 | Highlight
The Potential Role of Citizen Science for Addressing Global Challenges and Achieving the UN Sustainable Development GoalsDilek Fraisl, Jillian Campbell, Linda See, Uta Wehn, Jessica Wardlaw, Margaret Gold, Inian Moorthy, Rosa Arias, Jaume Piera, Jessica L. Oliver, Joan Maso, Marianne Penker, and Steffen Fritz
The contribution of citizen science to addressing societal challenges has long been recognized. The United Nations (UN) Sustainable Development Goals (SDGs), as an overarching policy framework and a roadmap to guide global development efforts until 2030 for achieving a better future for all, could benefit from the potential that citizen science offers. However, there is a lack of knowledge on the value of citizen science, particularly in addressing the data needs for SDG monitoring, among the UN agencies, national statistical offices, policy makers and the citizen science community itself. To address this challenge, we launched a Community of Practice on Citizen Science and the SDGs (SDGs CoP) in November 2018 as part of the EU Horizon 2020 funded WeObserve project.
The SDGs CoP brings together citizen science researchers, practitioners, UN custodian agencies, broader data communities and other key actors to develop an understanding on how to demonstrate the value of citizen science for SDG achievement. The initial focus and the main objective of the SDGs CoP has been to conduct a research study to understand the contribution of citizen science to SDG monitoring and implementation. In this talk, we will present the work of the SDGs CoP. We will first discuss existing data gaps and needs for measuring progress on the SDGs, and then provide an overview on the results of a systematic review that we undertook within the CoP, showing where citizen science is already contributing and could contribute data to the SDG framework. We will provide concrete examples of our findings to demonstrate how citizen science data could inform the SDGs. We will also touch on the challenges for and barriers to the uptake of citizen science data for the SDG monitoring processes, and how to bring this source of data into the scope of official statistics.
How to cite: Fraisl, D., Campbell, J., See, L., Wehn, U., Wardlaw, J., Gold, M., Moorthy, I., Arias, R., Piera, J., L. Oliver, J., Maso, J., Penker, M., and Fritz, S.: The Potential Role of Citizen Science for Addressing Global Challenges and Achieving the UN Sustainable Development Goals, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7453, https://doi.org/10.5194/egusphere-egu2020-7453, 2020.
EGU2020-18606 | Displays | ITS1.8/SSS1.1
Co-Designing Mobile Applications for Data Collection in Citizen Science Projects – Challenges and Lessons Learned within the Nachtlicht-BüHNE ProjectFriederike Klan, Christopher C.M. Kyba, Nona Schulte-Römer, Helga U. Kuechly, Jürgen Oberst, and Anastasios Margonis
Data contributed by citizen scientists attract increasing interest in many areas of scientific research. Increasingly, projects rely on information technology such as mobile applications (apps) to facilitate data collection by lay people. When developing such smartphone apps, it is essential to account for both the requirements of the scientists interested in acquiring data and the needs of the citizen scientists contributing data. Citizens and participating scientists should therefore ideally work together during the conception, design, and testing of mobile applications used in a citizen science project. This benefits both sides, as scientists and citizens alike can bring in their expectations, desires, knowledge, and commitment early on, thereby making better use of the potential of citizen science. Such app co-design processes are highly transdisciplinary and thus pose challenges in terms of the diversity of interests, skills, and background knowledge involved.
Our “Nachtlicht-BüHNE” citizen science project addresses these issues. Its major goal is the development of a co-design process enabling scientists and citizens to jointly develop citizen science projects based on smartphone apps. This includes (1) the conception and development of a mobile application for a specific scientific purpose, (2) the design, planning and organization of field campaigns using the mobile application, and (3) the evaluation of the approach. In Nachtlicht-BüHNE, the co-design approach is developed within the scope of two parallel pilot studies in the environmental and space sciences. Case study 1 deals with the problem of light pollution. Currently, little is known about how much different light source types contribute to emissions from Earth. Within the project, citizens and researchers will develop and use an app to capture information about all types of light sources visible from public streets. Case study 2 focuses on meteors. They are of great scientific interest because their pathways and traces of light can be used to derive dynamic and physical properties of comets and asteroids. Since the surveillance of the sky with cameras is usually incomplete, reports of fireball sightings are important. Within the project, citizens and scientists will create and use the first German-language app that allows reporting meteor sightings.
We will share our experiences on how researchers and communities of citizen scientists with backgrounds in the geosciences, space research, the social sciences, computer science and other disciplines work together in the Nachtlicht-BüHNE project to co-design mobile applications. We highlight challenges that arose and present different strategies for co-design that evolved within the project accounting for the specific needs and interests of the communities involved.
How to cite: Klan, F., Kyba, C. C. M., Schulte-Römer, N., Kuechly, H. U., Oberst, J., and Margonis, A.: Co-Designing Mobile Applications for Data Collection in Citizen Science Projects – Challenges and Lessons Learned within the Nachtlicht-BüHNE Project, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18606, https://doi.org/10.5194/egusphere-egu2020-18606, 2020.
EGU2020-4964 | Displays | ITS1.8/SSS1.1 | Highlight
Geocitsci.com: A citsci platform for geological hazardsRecep Can, Sultan Kocaman, and Candan Gokceoglu
Geospatial technologies and data are versatile and valuable, and can initiate and contribute to many different lines of scientific research. The rapid scientific and technological developments in this field lead to new research areas. Before the nature, quantity, quality, accuracy, or infrastructure of geospatial data can be discussed, the data must first be obtained. Geospatial data collection is usually carried out by professional users with appropriate sensors. However, with the widespread use of mobile devices with coordinate-measurement capability (i.e., GNSS receivers), access to freely available remote sensing data and maps, and web map applications, non-professionals are becoming increasingly capable of collecting and interpreting geospatial data, and thus of contributing to this domain under various terms such as volunteered geographic information (VGI), participatory geographic information, etc.
Citizen science (CitSci) refers to the participation of individuals in scientific studies regardless of their research background. CitSci has important potential for geoscience research that needs massive and timely geospatial data. Considering that almost everyone can access the Internet, online CitSci repositories where geospatial data are collected, analyzed, and reported are a good option for harnessing the potential of CitSci, since they provide platform independence for web and mobile apps with a data connection as the sole requirement.
GeoCitSci is a freely accessible geospatial CitSci repository with a WebGIS interface and a mobile application (LaMA). The platform was initially developed to support landslide researchers. The LaMA app and GeoCitSci help volunteers upload images and their observations of landslides, such as damage. The system can be adapted to other types of hazards, such as earthquakes. In addition to the mobile app, a web map interface that allows data upload has also been implemented. A geodatabase running on the server complements the system by storing the collected data, together with a mechanism for analyzing landslides from photos to ensure high-quality content. Such a mechanism for checking the quality of the data provided by participants is an indispensable part of CitSci repositories.
Since CitSci methods by nature address volunteers with different knowledge, experience, and perspectives, the system was implemented with a simple, responsive, and highly understandable interface that all participants can use easily, following an in-site navigation approach. The web map edit service was developed for those who do not have a smartphone with location capability or who have no Internet access. Images obtained from participants are of great importance for analyzing the landslides. A deep learning architecture has been developed and integrated into the application, which automatically detects and classifies whether or not an image contains a landslide. This architecture overcomes the data quality control problem, which is crucial in CitSci projects, and eliminates manual labor. The system is currently being adapted to earthquake research for disaster mitigation and management, and to flood mapping in order to support public safety and reduce risks and losses.
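The automated photo screening described above can be sketched as a simple triage around the classifier's output probability: confident positives are accepted, confident negatives discarded, and ambiguous cases queued for review. The function name and thresholds below are illustrative assumptions, not details of the GeoCitSci implementation.

```python
def triage_submission(landslide_prob, accept=0.8, reject=0.2):
    """Route a volunteer photo by the deep-learning classifier's
    landslide probability: auto-accept confident positives, discard
    confident negatives, and queue ambiguous cases for expert review.
    Thresholds are illustrative assumptions, not the platform's values."""
    if not 0.0 <= landslide_prob <= 1.0:
        raise ValueError("probability must lie in [0, 1]")
    if landslide_prob >= accept:
        return "accepted"
    if landslide_prob <= reject:
        return "rejected"
    return "needs_review"
```

Keeping a human-review band between the two thresholds is what lets such a pipeline eliminate most manual labor without silently discarding borderline but genuine landslide reports.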
How to cite: Can, R., Kocaman, S., and Gokceoglu, C.: Geocitsci.com: A citsci platform for geological hazards, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4964, https://doi.org/10.5194/egusphere-egu2020-4964, 2020.
EGU2020-18119 | Displays | ITS1.8/SSS1.1 | Highlight
Remote sensing and citizen science observatories: a promising partnership for phenology monitoringCristina Domingo-Marimon, Ester Prat, Pau Guzmán, Alaitz Zabala, and Joan Masó
Changes in the rhythm of nature are recognized as a useful proxy for detecting climate change and a very interesting source of data for scientists investigating its effects on natural ecosystems. Phenology is the science that observes and studies the phases of the life cycles of living organisms and how seasonal and interannual variations in climate affect them.
Traditionally, farmers, naturalists, and scientists recorded phenological observations on paper for decades. Most of these observations correspond to practices today associated with Citizen Science. Until now, in-situ observations have been limited to traditional specimens located close to the observer's home, such as garden plants, fruit trees, butterflies, swallows, or storks, and volunteers' efforts have generally been somewhat biased towards accessible locations (close to roads or urban areas). However, the strong variability of vegetation phenology across biomes requires more data to improve knowledge of these changes. Despite these limitations, local, regional, and national networks are dedicated to collecting evidence of changes in vegetation phenology. At the sub-national level in Catalonia (north-east of the Iberian Peninsula), the Catalan weather service deployed the FenoCat initiative, and within the H2020 Groundtruth 2.0 project, RitmeNatura.cat (www.ritmenatura.cat) was co-designed as a phenological Citizen Observatory with a community of phenology observers collecting either occasional or regular observations. It monitors 12 species and provides observers with species-phenophase guidance. Fortunately, scientists have found another ally for increasing the collection of vegetation phenology data at the global level: remote sensing.
Remote sensing (RS) provides several products with different spatial and spectral resolutions. MODIS, with a daily revisit, is ideal for detecting vegetation phenology, but in many areas of the world its 250 m spatial resolution is too coarse to account for small heterogeneous landscapes. At the other extreme, high-resolution imagery such as Landsat has a limited temporal resolution of only two revisits per month, too low to generate a regular (and dense enough) time series once cloud cover is masked. Sentinel-2A and 2B, with higher resolution, global coverage, and a 5-day revisit, offer a good compromise. Still, what can be obtained from space differs methodologically from in-situ observations, and the two are hardly comparable. The PhenoTandem project (http://www.ritmenatura.cat/projects/phenotandem/index-eng.htm), part of the CSEOL initiative funded by ESA, provides an innovation consisting in co-designing a new protocol with citizen scientists that will make in-situ observations interoperable with remote sensing products, by selecting the areas and habitats where traditional phenological in-situ observations made by volunteers can also be observed in Sentinel-2 imagery.
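From a cloud-masked satellite time series, one common way to derive a phenological metric comparable with in-situ phenophase dates is an amplitude-threshold method on an NDVI series. The sketch below is a simplified illustration under that assumption; it is not the PhenoTandem protocol, and operational workflows add smoothing, gap-filling, and per-pixel quality flags.

```python
import numpy as np

def start_of_season(doy, ndvi, frac=0.5):
    """Estimate start of season (SOS) as the first day of year at which
    NDVI rises above a fixed fraction of its seasonal amplitude.

    `doy` and `ndvi` are same-length arrays for one pixel and one year,
    already cloud-masked. A simplified threshold method (assumption)."""
    doy = np.asarray(doy, dtype=float)
    v = np.asarray(ndvi, dtype=float)
    lo, hi = v.min(), v.max()
    threshold = lo + frac * (hi - lo)
    above = np.nonzero(v >= threshold)[0]
    return float(doy[above[0]]) if above.size else None
```

With a 5-day Sentinel-2 revisit, such a series is dense enough for the crossing date to be located to within roughly one revisit interval, which is what makes the comparison with volunteers' phenophase reports feasible.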
Harmonizing citizen science and remote sensing observations, promoted through observatories, thus constitutes a promising partnership for phenology monitoring.
How to cite: Domingo-Marimon, C., Prat, E., Guzmán, P., Zabala, A., and Masó, J.: Remote sensing and citizen science observatories: a promising partnership for phenology monitoring, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18119, https://doi.org/10.5194/egusphere-egu2020-18119, 2020.
Changes in the rhythm of nature are recognized as a useful proxy for detecting climate change and a very interesting source of data for scientists investigating its effects on the natural ecosystems. In this sense, phenology is the science that observes and studies the phases of the life cycling of living organisms and how the seasonal and interannual variations of climate affect them.
Traditionally, farmers or naturalists and scientists recorded phenological observations on paper for decades. Most of these observations correspond to practices today associated to Citizen Science. So far, in-situ observations were reduced to small traditional specimens closely located to the observer home, such as garden plants or fruit trees, butterflies, swallows or storks and, in general, the volunteers efforts were a bit biased towards accessible locations (close to the roads or urban areas). However, the strong variability of the vegetation phenology across biomes requires having more data to improve the knowledge about these changes. Despite its limitations, local, regional or national networks are dedicated to the collection of evidences on changes of vegetation phenology. At sub-national level in Catalonia (north-east of the Iberian Peninsula), the Catalan weather service deployed the FenoCat initiative and in the H2020 Groundtruth 2.0 project, RitmeNatura.cat (www.ritmenatura.cat) was co-designed as a phenological Citizen Observatory that has a community of phenology observers collecting either occasional or regular observations. It monitors 12 species and provides observers with species-phenophase guidance. Fortunately, scientists have found another ally to increase the collection of vegetation phenology data at global level: remote sensing.
Remote Sensing (RS) provides several products with different spatial and spectral resolutions. MODIS, with a daily revisit, is well suited to detecting vegetation phenology, but in many areas of the world its 250 m spatial resolution is too coarse to resolve small, heterogeneous landscapes. At the other extreme, high-resolution imagery such as Landsat has a limited temporal resolution of only two revisits per month, too low to generate a regular (and sufficiently dense) time series once cloud cover is masked. Sentinel-2 A and B, with higher spatial resolution, global coverage and a 5-day revisit, offer a good compromise. Still, what can be obtained from space differs methodologically from in-situ observations, and the two are hard to compare. The PhenoTandem project (http://www.ritmenatura.cat/projects/phenotandem/index-eng.htm), part of the CSEOL initiative funded by ESA, provides an innovation: a new protocol co-designed with citizen scientists that makes in-situ observations interoperable with remote sensing products, by selecting the areas and habitats where traditional phenological in-situ observations made by volunteers can also be observed in Sentinel-2 imagery.
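The revisit trade-off described above can be made concrete with a back-of-the-envelope estimate; the 50% cloud-free fraction below is an illustrative assumption, not a figure from the abstract:

```python
# Rough comparison of how many usable (cloud-free) acquisitions per month
# each sensor could contribute to a phenology time series. Revisit rates
# follow the abstract (MODIS daily, Landsat ~2/month, Sentinel-2 A+B every
# 5 days); the cloud-free fraction is an assumed illustrative value.
CLOUD_FREE_FRACTION = 0.5

sensors = {
    "MODIS (250 m)": 30,            # revisits per month
    "Landsat (30 m)": 2,
    "Sentinel-2 A+B (10-20 m)": 6,
}

def usable_per_month(revisits, cloud_free=CLOUD_FREE_FRACTION):
    """Expected number of cloud-free observations per month."""
    return revisits * cloud_free

for name, revisits in sensors.items():
    print(f"{name}: ~{usable_per_month(revisits):.0f} usable scenes/month")
```

Under this crude assumption, Landsat yields roughly one usable scene per month, too sparse for a dense phenology curve, while Sentinel-2 retains several.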
Harmonizing citizen science and remote sensing observations, promoted through observatories, thus offers a promising partnership for phenology monitoring.
EGU2020-21328 | Displays | ITS1.8/SSS1.1
From crowdsourcing environmental measurements to their integration in the GEOSS portal
Valantis Tsiakos, Maria Krommyda, Athanasia Tsertou, and Angelos Amditis
Environmental monitoring is based on time series of data collected over long periods from expensive and hard-to-maintain in-situ sensors that are available only in specific areas. Due to climate change, it is important to monitor extended areas of interest. This need has raised the question of whether such monitoring can be complemented or replaced by Citizen Science.
Crowdsourced measurements from low-cost, easy-to-use portable sensors and devices can facilitate the collection of the needed information with the support of volunteers, enabling the monitoring of environmental ecosystems and extended areas of interest. In particular, recent years have seen a rapid increase in citizen-generated knowledge, facilitated by the wider use of mobile devices and low-cost portable sensors. To enable their easy integration into existing models and systems, as well as their utilisation in new applications, citizen science data should be easily discoverable, re-usable, accessible and available for future use.
The Global Earth Observation System of Systems (GEOSS) offers a single access point to Earth Observation data (GEOSS Portal), connecting users to various environmental monitoring systems around the world while promoting the use of common technical standards to support their utilisation.
Such a connection was demonstrated in the context of the SCENT project, an EU project that has implemented an integrated toolbox of smart, collaborative and innovative technologies allowing volunteers to collect environmental measurements as part of their everyday activities.
These measurements include images carrying information about the land cover and land use of an area, air temperature and soil moisture readings from low-cost portable environmental sensors, and river measurements, water level and water velocity, extracted from multimedia (images and video) through dedicated tools.
The collected measurements are provided to policy makers and scientists to facilitate decision making on needed actions and infrastructure improvements, as well as the monitoring of environmental phenomena such as floods through the crowdsourced information.
To ensure that the provided measurements are of high quality, a dedicated quality-control mechanism has been implemented that uses spatial and temporal clustering to identify biased or low-quality contributions and remove them from the system.
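The abstract does not specify SCENT's clustering mechanism; as a hedged illustration of the general idea, the sketch below flags a contribution when it disagrees strongly with the median of its spatio-temporal neighbours. All thresholds are invented values, not the project's settings:

```python
from statistics import median

def flag_outliers(measurements, radius=0.01, window=3600,
                  tol=5.0, min_neighbours=2):
    """
    Split measurements into (kept, removed) by spatio-temporal consistency.
    measurements: list of dicts {lat, lon, t (unix seconds), value}.
    A point is removed only if at least min_neighbours other points fall
    within the spatial radius (degrees) and time window (seconds) and its
    value deviates from their median by more than tol. Isolated points
    are kept, since there is nothing to compare them against.
    """
    kept, removed = [], []
    for i, m in enumerate(measurements):
        neigh = [n["value"] for j, n in enumerate(measurements)
                 if j != i
                 and abs(n["lat"] - m["lat"]) <= radius
                 and abs(n["lon"] - m["lon"]) <= radius
                 and abs(n["t"] - m["t"]) <= window]
        if len(neigh) >= min_neighbours and abs(m["value"] - median(neigh)) > tol:
            removed.append(m)   # biased / low-quality contribution
        else:
            kept.append(m)      # plausible, or too isolated to judge
    return kept, removed
```

For example, three water-level readings reported at the same spot within minutes, two near 1 m and one at 9.9 m, would see the 9.9 m reading removed.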
Finally, recognising the importance of making the collected data available, all validated measurements are modelled, stored and provisioned using the Open Geospatial Consortium (OGC) standards Web Feature Service (WFS) and Web Map Service (WMS), as applicable.
This allows the spatial and temporal discovery of information among the collected measurements, encourages their re-usability, and allows their integration into systems and platforms utilizing the same standards. The data collected by the SCENT campaigns organized at the Kifisos river basin and the Danube Delta can be found at the GEOSS portal under the WFS at https://www.geoportal.org/?f:sources=wfsscentID and under the WMS at https://www.geoportal.org/?f:sources=wmsSCENTID.
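Because the data are exposed through standard OGC interfaces, any client can retrieve them with an ordinary WFS GetFeature request. The sketch below builds such a request using the standard WFS 2.0 key-value parameters; the endpoint URL and layer name are placeholders, not the actual SCENT service identifiers:

```python
from urllib.parse import urlencode

def wfs_getfeature_url(base, typename, bbox=None, max_features=100):
    """
    Build an OGC WFS 2.0 GetFeature request URL using the standard
    key-value-pair encoding. bbox is (min_lon, min_lat, max_lon, max_lat).
    """
    params = {
        "service": "WFS",
        "version": "2.0.0",
        "request": "GetFeature",
        "typeNames": typename,
        "count": max_features,
        "outputFormat": "application/json",
    }
    if bbox:
        params["bbox"] = ",".join(map(str, bbox)) + ",EPSG:4326"
    return base + "?" + urlencode(params)

# Hypothetical endpoint and layer name, for illustration only:
url = wfs_getfeature_url("https://example.org/wfs", "scent:water_level",
                         bbox=(23.6, 37.9, 23.8, 38.1))
```

The same pattern works for any WFS-compliant service, which is precisely the interoperability benefit the abstract describes.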
This activity is showcased as part of the WeObserve project, which has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 776740.
How to cite: Tsiakos, V., Krommyda, M., Tsertou, A., and Amditis, A.: From crowdsourcing environmental measurements to their integration in the GEOSS portal, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21328, https://doi.org/10.5194/egusphere-egu2020-21328, 2020.
EGU2020-7310 | Displays | ITS1.8/SSS1.1
Alternative Interfaces for Improved Representation and Cultural Inclusion in Web-Based PPGIS
Timna Denwood, Jonathan Huck, and Sarah Lindley
The development of new infrastructure (such as wind farms) often faces opposition from local citizens and other stakeholders due to concerns over the trade-off between cultural and provisioning services. PPGIS (Public Participatory Geographic Information Science) can be used to identify areas of conflict, as well as to obtain qualitative data on existing or proposed infrastructure, and therefore to minimise disruption at later stages of the planning process. Despite PPGIS being designed to increase democracy in the decision-making process, the tools to do so are often lacking. This can result in the collected data being ignored or misinterpreted because it fails to adequately represent the views of citizens, and in the exclusion of certain parties due to digital divides. One way in which current tools fall short is the uncritical use of spatial primitives such as points and polygons. These dominate PPGIS tools yet can, in some circumstances, offer a poor representation of the complex relationships between people and place. This research explores three ways in which citizens’ views might be better represented by alternative PPGIS interfaces. User surveys and interviews were carried out through a case study on the isles of Barra and Vatersay, Outer Hebrides, UK.
Firstly, we address the challenge of generalisation in line-based PPGIS by asking participants where they would like to see new footpaths. Our interface replaces the traditional line digitisation model with one in which user-generated ‘anchor points’ are joined not with straight edges but with least-cost paths. This approach means that the level of generalisation of each line is standardised, based upon the resolution of the underlying elevation data. The standardised level of generalisation also means that similar inputs will follow the same route, avoiding the need for path bundling, which can draw results away from their intended location. As such, realistic and representative outputs can be produced with minimal effort required of the participant. Secondly, we use viewsheds as a spatial unit, drawn in real time when the user clicks on the map. Participants are asked to click on locations from which they would not wish to be able to see a turbine (e.g. their house), and the map is then populated with a viewshed delineating the areas in which a turbine could not be placed. This approach better reflects how citizens would experience the installation in real life, rather than simply adding points at locations that they believe to be suitable or unsuitable without any contextual information. Finally, we consider the same questions again, but this time using a paper-based interface instead of a digital one. This enables an assessment of how a non-digital PPGIS interface might influence participant accessibility and subsequent analysis.
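The anchor-point joining idea can be sketched as a standard Dijkstra least-cost path over an elevation grid. The cost function below (one unit per step plus the absolute elevation change, so flat routes are preferred) is an assumption for illustration; the study's actual cost surface is not specified in the abstract:

```python
import heapq

def least_cost_path(elev, start, goal):
    """
    Dijkstra least-cost path between two cells of an elevation grid.
    elev: 2D list of elevations; start/goal: (row, col) tuples.
    Step cost = 1 + |elevation change|, an illustrative cost function.
    """
    rows, cols = len(elev), len(elev[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + 1 + abs(elev[nr][nc] - elev[r][c])
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Walk back from goal to start to recover the route
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```

Because the route depends only on the elevation raster, two participants placing nearly identical anchor points recover the same path, which is the standardisation property the paragraph describes.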
We present preliminary results, and explore how alternative spatial units and interfaces might permit researchers to gain greater insight into participants’ spatial thoughts and feelings for more inclusive and representative environmental decision-making.
How to cite: Denwood, T., Huck, J., and Lindley, S.: Alternative Interfaces for Improved Representation and Cultural Inclusion in Web-Based PPGIS, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7310, https://doi.org/10.5194/egusphere-egu2020-7310, 2020.
EGU2020-4761 | Displays | ITS1.8/SSS1.1
Open data in building resilience to recurrent natural hazards in remote mountainous communities of Nepal
Binod Prasad Parajuli, Puja Shakya, Prakash Khadka, Wei Liu, and Uttam Pudasaini
The concept of using open data in development planning and in building resilience to frequent environmental hazards has gained substantial momentum in recent years. It helps in better understanding local capacities and associated risks in order to develop appropriate risk reduction strategies. Currently, the lack of accurate and sufficient data has contributed to increased environmental risks, depriving local planners of the opportunity to consider these risks in advance. To fill this gap, this study presents an innovative approach of using openly available platforms to map locally available resources and associated risks in two remote communities of Nepal. The study also highlights the possibility of combining the knowledge of technical experts and citizen scientists to collect geo-spatial data to support sound decision making. We harnessed the power of citizen scientists by training them on currently available tools and platforms, and we equipped these communities with the necessary instruments to collect location-based data. The data collected by citizen scientists were then uploaded to the online platforms. The collected data are freely accessible to community members, government and humanitarian actors and can be used for development planning and risk reduction. Moreover, the information co-generated by local communities and scientists could be crucial for local government bodies planning activities related to disaster risk reduction. Through piloting in two communities of Nepal, we have found that using open data platforms for collecting and analysing location-based data benefits both researchers and communities. These data could be vital in understanding the local landscape of development, environmental risk and the distribution of resources. Furthermore, the approach enables both researchers and local people to transfer technical knowledge, collect location-specific data and use them for better decision making.
How to cite: Parajuli, B. P., Shakya, P., Khadka, P., Liu, W., and Pudasaini, U.: Open data in building resilience to recurrent natural hazards in remote mountainous communities of Nepal , EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4761, https://doi.org/10.5194/egusphere-egu2020-4761, 2020.
EGU2020-7870 | Displays | ITS1.8/SSS1.1 | Highlight
Monitoring of land use change by citizens: The FotoQuest experience
Juan Carlos Laso Bayas, Linda See, Tobias Sturn, Mathias Karner, Dilek Fraisl, Inian Moorthy, Anto Subash, Ivelina Georgieva, Gerid Hager, Myroslava Lesiv, Hadi Hadi, Olha Danylo, Santosh Karanam, Martina Dürauer, Domian Dahlia, Dmitry Shchepashchenko, Ian McCallum, and Steffen Fritz
Almost 6 years ago, what is now the Center for Earth Observation and Citizen Science (EOCS) at the International Institute for Applied Systems Analysis (IIASA) pioneered a crowdsourcing mobile app that allowed citizens to report land use and land cover at specific locations across Austria. The app is called FotoQuest Austria (and FotoQuest Go Europe when extended outside of Austria) and uses the GPS capabilities of mobile phones to allow citizens to visit locations near them and provide information on various land-related characteristics. A subset of the locations in FotoQuest Austria matched those used in the three-yearly Land Use and Coverage Area frame Survey (LUCAS) from Eurostat. The interface was developed to mimic part of the protocol that LUCAS surveyors use when visiting locations across Europe, but in this case allowing any citizen to record the land use and land cover characteristics observed at these locations. Over a period of 4 years, the FotoQuest project continued to improve: in the 2015 FotoQuest Austria campaign, 76 citizens collected data at over 600 LUCAS locations, although only 300 were used for comparison, mostly for quality reasons (Laso Bayas et al. 2016). In the 2018 FotoQuest Go Europe campaign, 140 users from 18 different countries visited 1600 locations, with almost 1400 currently used for analysis. Apart from the increased number of countries and locations, the user interface, experience and interaction with the app were continuously enhanced. Although LUCAS took place only twice in this period (2015 and 2018), FotoQuest ran 3 official campaigns, which allowed us to introduce improvements in each campaign and also enabled citizens to continue providing land use change information between campaigns. In 2015, the agreement between the main land cover classes in LUCAS and FotoQuest Austria was 69%, whereas in the 2018 FotoQuest Go Europe campaign it was over 90%.
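The agreement figures quoted above come down to comparing crowdsourced and reference class labels at matched locations. A minimal sketch of that metric, with invented example data rather than the campaign results:

```python
def agreement(reference, crowdsourced):
    """
    Overall percentage agreement between two sequences of land cover
    class labels recorded at the same locations.
    """
    assert len(reference) == len(crowdsourced)
    matches = sum(r == c for r, c in zip(reference, crowdsourced))
    return 100.0 * matches / len(reference)

# Illustrative (made-up) labels at five matched LUCAS locations:
lucas =  ["cropland", "forest", "grassland", "forest", "artificial"]
quests = ["cropland", "forest", "grassland", "cropland", "artificial"]
score = agreement(lucas, quests)  # 4 of 5 labels match
```

In practice one would also inspect a per-class confusion matrix, since overall agreement can hide systematic confusion between similar classes.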
Currently, data from all campaigns are being compiled and will be made freely available through the Geo-Wiki open platform (www.geo-wiki.org). This presentation will describe the development of the FotoQuest project as an example of a citizen science project that provides open data, including engagement strategies, improvements to the user interface and experience, and the lessons learnt from user uptake and from matching the crowdsourced data against the official LUCAS results. We hope the lessons we have learned during the project can help other citizen science projects share their data more openly and increase citizen participation.
Related publication:
Laso Bayas, J C, L See, S Fritz, T Sturn, C Perger, M Dürauer, M Karner, et al. 2016. “Crowdsourcing In-Situ Data on Land Cover and Land Use Using Gamification and Mobile Technology.” Remote Sensing 8 (11): e905. https://doi.org/10.3390/rs8110905.
How to cite: Laso Bayas, J. C., See, L., Sturn, T., Karner, M., Fraisl, D., Moorthy, I., Subash, A., Georgieva, I., Hager, G., Lesiv, M., Hadi, H., Danylo, O., Karanam, S., Dürauer, M., Dahlia, D., Shchepashchenko, D., McCallum, I., and Fritz, S.: Monitoring of land use change by citizens: The FotoQuest experience, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7870, https://doi.org/10.5194/egusphere-egu2020-7870, 2020.
EGU2020-7613 | Displays | ITS1.8/SSS1.1
Estimation of soil moisture content using Citizen observatory data: lessons learnt from the GROW Observatory project
Endre Dobos, Károly Kovács, Daniel Kibirige, and Péter Vadnai
Soil moisture is a crucial factor for agricultural activity, and also an important factor in weather forecasting and climate science. Despite technological developments in soil moisture sensing, no full-coverage global, continental or even national-scale soil moisture monitoring system exists. A new European initiative aims to demonstrate the feasibility of a citizen-observatory-based soil moisture monitoring system. The aim of this study is to characterize this new monitoring approach and provide provisional results on its interpretation and system performance.
GROW Observatory is a project funded under the European Union's Horizon 2020 research and innovation programme. Its aim is to establish a large-scale (>20,000 participants), resilient and integrated ‘Citizen Observatory’ (CO) and community for environmental monitoring that is self-sustaining beyond the life of the project. This article describes how the initial framework and tools were developed to evolve, bring together and train such a community: raising interest, engaging participants, and educating to support reliable observations, measurements and documentation, with a special focus on the reliability of the resulting dataset for scientific purposes. The scientific purposes of GROW Observatory are to test the data quality and spatial representativity of a citizen-engagement-driven spatial distribution as reliable inputs for soil moisture monitoring; to create time series of gridded soil moisture products based on citizens’ observations using low-cost soil moisture (SM) sensors; and to provide an extensive dataset of in-situ soil moisture observations which can serve as a reference to validate satellite-based SM products and support the Copernicus in-situ component. This article aims to showcase the design, the tools and the digital soil mapping approaches behind the final soil moisture product.
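The gridded-product step means turning scattered sensor readings into a regular raster. As a hedged stand-in for GROW's (more elaborate) digital soil mapping approach, the sketch below uses simple inverse-distance weighting; all coordinates and values are invented:

```python
def idw(points, grid_lons, grid_lats, power=2.0):
    """
    Inverse-distance-weighted interpolation of scattered soil moisture
    readings onto a regular grid.
    points: list of (lon, lat, value) tuples from in-situ sensors.
    Returns a row-per-latitude 2D list of interpolated values.
    """
    grid = []
    for lat in grid_lats:
        row = []
        for lon in grid_lons:
            num = den = 0.0
            exact = None
            for plon, plat, val in points:
                d2 = (plon - lon) ** 2 + (plat - lat) ** 2
                if d2 == 0.0:
                    exact = val   # grid cell coincides with a sensor
                    break
                w = d2 ** (-power / 2.0)   # weight = 1 / distance^power
                num += w * val
                den += w
            row.append(exact if exact is not None else num / den)
        grid.append(row)
    return grid
```

A production system would also propagate the per-sensor quality flags into the gridded product, since citizen sensor networks are far less uniform than professional ones.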
How to cite: Dobos, E., Kovács, K., Kibirige, D., and Vadnai, P.: Estimation of soil moisture content using Citizen observatory data -lessons learnt from GROW Observatory project, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7613, https://doi.org/10.5194/egusphere-egu2020-7613, 2020.
Soil moisture is a crucial factor for agricultural activity, but also an important factor for weather forecast and climate science. Despite of the technological development in soil moisture sensing, no full coverage global or continental or even national scale soil moisture monitoring system exist. There is a new European initiative to demonstrate the feasibility of a citizen observatory based soil moisture monitoring system. The aim of this study is to characterize this new monitoring approach and provide provisional results on the interpretation and system performance.
GROW Observatory is a project funded under the European Union’s Horizon 2020 research and innovation program. Its aim is to establish a large scale (>20,000 participants), resilient and integrated ‘Citizen Observatory’ (CO) and community for environmental monitoring that is self-sustaining beyond the life of the project. This article describes how the initial framework and tools were developed to evolve, bring together and train such a community; raising interest, engaging participants, and educating to support reliable observations, measurements and documentation, and considerations with a special focus on the reliability of the resulting dataset for scientific purposes. The scientific purposes of GROW observatory are to test the data quality and the spatial representativity of a citizen engagement driven spatial distribution as reliably inputs for soil moisture monitoring and to create timely series of gridded soil moisture products based on citizens’ observations using low cost soil moisture (SM) sensors, and to provide an extensive dataset of in-situ soil moisture observations which can serve as a reference to validate satellite-based SM products and support the Copernicus in-situ component. This article aims to showcase the design, tools and the digital soil mapping approaches of the final soil moisture product.
EGU2020-3068 | Displays | ITS1.8/SSS1.1 | Highlight
TeaTime4App – Raising awareness about the role of soils with the educational “Tea Bag Index App”Julia Miloczki, Anna Wawra, Markus Gansberger, Philipp Hummer, and Taru Sandén
With the Tea Bag Index (TBI) App, we aim to foster awareness of the importance of soils and their ecosystem services to students above the age of 10. The TBI app consists of three categories of hands-on activities: Basic soil attributes, Soil observations, and Tea Bag Index. Basic soil attributes include land use, soil colour and soil life, whereas soil observations go further to Texture by Feel, Spade Test and observation of soil pollution. The Tea Bag Index (Keuskamp et al., 2013) provides an easy and scientifically recognized way to measure decomposition rates and stabilisation of organic matter in soils. The method consists of burying tea bags and measuring the degradation of organic material after three months’ time. Each of the methods includes clear instructions and extra information in the app. Data gathered are interactively shown on a map in the App as well as online. Hence, students are encouraged to gain hands-on science experience and to witness how science connects across regions, countries and cultures. By using playful tools such as rewards, badges and a point system, we attract and maintain the interest of students. Social media channels are used to exchange and share their results as well as to reach teachers and citizen scientists in order to inspire them to use the educational App.
With this awareness of soil and its functions, citizen scientists can make valuable contributions to the sustainable use of soils. They also have the opportunity to participate in a global scientific initiative, acquire skills in conducting a scientific experiment and gain knowledge of soil functions. The science community, on the other hand, increases its understanding of the factors influencing decomposition (and associated soil functions) at different times and in different places globally.
Moreover, the TBI App can be used for “Content and Language Integrated Learning” (CLIL), i.e. the use of a foreign language for the integrative teaching of subject content and language competence outside of dedicated language lessons, in agricultural schools in Austria. Individual learning outcomes (ILOs) of an agricultural school class testing the TBI App were evaluated in an online questionnaire. Results showed high appreciation of the activities offered by the TBI App and high motivation of students to contribute to science.
Keuskamp, J.A., Dingemans, B.J.J., Lehtinen, T., Sarneel, J.M. and Hefting, M.M. (2013), Tea Bag Index: a novel approach to collect uniform decomposition data across ecosystems. Methods Ecol Evol, 4: 1070-1075. doi:10.1111/2041-210X.12097
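The Tea Bag Index calculation described above can be sketched as follows, using the published hydrolysable fractions for green and rooibos tea from Keuskamp et al. (2013). The single-time-point solution for the decomposition rate k assumes the standard roughly 90-day incubation; the function and variable names are illustrative.

```python
# Sketch of the Tea Bag Index (TBI) calculation after Keuskamp et al. (2013):
# stabilisation factor S from green tea mass loss, decomposition rate k from
# the remaining rooibos tea mass.
import math

H_GREEN = 0.842    # hydrolysable (chemically labile) fraction of green tea
H_ROOIBOS = 0.552  # hydrolysable fraction of rooibos tea

def tea_bag_index(green_mass_loss, rooibos_remaining, days=90):
    """green_mass_loss: decomposed fraction of green tea after incubation;
    rooibos_remaining: fraction of rooibos tea mass still present after `days`."""
    S = 1.0 - green_mass_loss / H_GREEN          # stabilisation factor
    a_r = H_ROOIBOS * (1.0 - S)                  # predicted labile fraction of rooibos
    # Rooibos mass follows W(t) = a_r * exp(-k*t) + (1 - a_r); solve for k.
    k = -math.log((rooibos_remaining - (1.0 - a_r)) / a_r) / days
    return S, k

# Example: 60% of green tea decomposed, 75% of rooibos mass remaining.
S, k = tea_bag_index(0.60, 0.75)
```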
How to cite: Miloczki, J., Wawra, A., Gansberger, M., Hummer, P., and Sandén, T.: TeaTime4App – Raising awareness about the role of soils with the educational “Tea Bag Index App”, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3068, https://doi.org/10.5194/egusphere-egu2020-3068, 2020.
EGU2020-9023 | Displays | ITS1.8/SSS1.1
The SECOSTA Project. Citizen science to monitor beach topography with low cost instrumentsGabriel Jordà, Miguel Agulles, Joaquím Tomàs-Ferrer, and Joan Puigdefabregas
Beach monitoring plays a fundamental role both in understanding coastal morphodynamics and in assessing the risk of coastal flooding. This is a very relevant topic for areas whose economies are based on coastal activities such as maritime transport or coastal tourism. Unfortunately, up to now the instrumentation and the means required to carry out such monitoring have involved very high costs. As a consequence, only a limited number of beaches can be studied in detail.
One of the main objectives of the European project SOCLIMPACT is to quantitatively assess the loss of beach surface in the European islands due to projected climate change under different emission scenarios. The main difficulty in that activity is gathering accurate information on beach characteristics (topography, bathymetry, granulometry). To address that problem, the SECOSTA citizen science project has been launched with the support of the Balearic Islands regional government.
In the SECOSTA project, low-cost instrumentation based on ARDUINO technology has been developed to measure both the topography and the bathymetry of beaches. An educational programme has then been launched in secondary schools to teach students to build those instruments and to perform several observational campaigns to characterize sandy beaches along the Balearic Islands. In total, more than 20 secondary schools have participated, involving more than 2000 students in the construction of devices and the acquisition and processing of data. The results have then been used as the observational basis for a scientific study on projections of beach retreat in the European islands. Both the educational programme and the scientific results have also received broad coverage in the media. With this project, different sectors of citizenship (high school students, teachers, technicians, local government, press, etc.) are directly involved in addressing one of the major challenges our society is facing (i.e. sea level rise impacts). The same approach could be translated to other fields by developing suitable instrumentation.
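As an illustration of the kind of low-cost measurement involved, the sketch below converts echo times from an ultrasonic ranger mounted at a known height into ground elevations along a cross-shore profile. The sensor type, mounting geometry and fixed sound speed are assumptions for illustration, not the actual SECOSTA design.

```python
# Illustrative sketch (assumed setup, not the SECOSTA instrument): an
# ultrasonic ranger on a survey pole measures the round-trip echo time to the
# ground; subtracting the derived distance from the sensor height gives the
# ground elevation relative to the pole base.
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degC (temperature dependence ignored)

def elevation_from_echo(echo_time_s, sensor_height_m):
    """Distance = speed * time / 2 (out and back); elevation relative to datum."""
    distance = SPEED_OF_SOUND * echo_time_s / 2.0
    return sensor_height_m - distance

def beach_profile(samples):
    """samples: (cross_shore_distance_m, echo_time_s, sensor_height_m) per station."""
    return [(x, round(elevation_from_echo(t, h), 3)) for x, t, h in samples]

# Three stations moving seaward: longer echo times mean lower ground.
profile = beach_profile([(0.0, 0.0070, 1.5), (5.0, 0.0082, 1.5), (10.0, 0.0087, 1.5)])
```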
How to cite: Jordà, G., Agulles, M., Tomàs-Ferrer, J., and Puigdefabregas, J.: The SECOSTA Project. Citizen science to monitor beach topography with low cost instruments, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9023, https://doi.org/10.5194/egusphere-egu2020-9023, 2020.
EGU2020-19043 | Displays | ITS1.8/SSS1.1
Building up local knowledge on restoration: lessons learnt from organizing a set of crowdsourcing campaignsOlha Danylo, Hadi Hadi, Thoha Zulkarnain, Neha Joshi, Andree Ekadinata, Tobias Sturn, Fathir Mohamad, Bunga Goib, Ping Yowargana, Ian McCallum, Inian Moorthy, Linda See, Steffen Fritz, and Florian Kraxner
Restoration of degraded land is an important national goal to achieve Indonesia’s environmental targets. To map both land cover and land degradation, Indonesia needs timely, high quality data and the necessary tools. We have addressed this issue by running a sequence of crowdsourcing campaigns. Our aim is not only to collect the data but to also potentially present a way for citizens to contribute to larger environmental policies and strategies.
Focusing on land cover identification and tree cover change, we planned and ran a set of pilot crowdsourcing campaigns in two provinces in Indonesia. We analysed the data from these pilot campaigns, and then used the insights obtained in the subsequent crowdsourcing campaign on land cover identification, upscaled to the national level, which is currently ongoing. The campaigns were run using a mobile application developed as part of the RESTORE+ project. Through this application, we presented volunteers with simple microtasks by showing them satellite images and asking a simple yes/no question as to whether the image shows a particular land cover class. The application implemented a scoring system, which additionally performs quality control of the data contributed by the crowd, and users competed with each other to classify the satellite images displayed by the application. In total, 692 volunteers actively engaged in the pilot crowdsourcing campaigns and contributed more than 2.5 million satellite image interpretations.
Based on the insights from the pilot campaigns, as well as an expert consultation session in Indonesia, the crowdsourcing application was modified to ensure, first, a uniform number of interpretations across the images and, second, higher quality data, by allowing users to focus on geographical areas familiar to them and to see the larger area surrounding the target sample.
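A minimal sketch of how the crowd’s yes/no microtask answers could be aggregated per image is shown below, with the agreement fraction as a crude quality indicator. This is an illustration only; the project’s actual scoring and quality-control system is not described here, and all names are hypothetical.

```python
# Sketch (an assumption, not the RESTORE+ scoring system): aggregate yes/no
# crowd answers per satellite image by majority vote, keeping the agreement
# fraction as a simple quality-control measure.
from collections import Counter

def aggregate_votes(votes):
    """votes: {image_id: list of 'yes'/'no' answers} ->
    {image_id: (majority_label, agreement_fraction)}."""
    result = {}
    for image_id, answers in votes.items():
        counts = Counter(answers)
        label, n = counts.most_common(1)[0]   # most frequent answer
        result[image_id] = (label, n / len(answers))
    return result

agg = aggregate_votes({"img_001": ["yes", "yes", "no"],
                       "img_002": ["no", "no", "no"]})
```

Images with a low agreement fraction could then be routed to experts for review, which is one common way to compare crowd and expert accuracy.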
We analyzed the data collected and will present issues regarding data quality, comparing the accuracy of the contributions from the volunteers with the accuracy of the data collected by a set of experts. We show that a citizen science approach is promising and can complement scientific analyses and can provide potential inputs to policies on landscape restoration. A crowdsourcing approach to image interpretation can also help to shorten the time needed for data collection, making the process more cost-effective. In addition, the collective ownership of the results ensures their legitimacy and increases the chances of data acceptance.
We also focus on transparency and the importance of open data. We present how we have made the data generated by the crowd accessible in order to empower citizens to explore and process the data further, thereby actively participating in environmental decision making.
How to cite: Danylo, O., Hadi, H., Zulkarnain, T., Joshi, N., Ekadinata, A., Sturn, T., Mohamad, F., Goib, B., Yowargana, P., McCallum, I., Moorthy, I., See, L., Fritz, S., and Kraxner, F.: Building up local knowledge on restoration: lessons learnt from organizing a set of crowdsourcing campaigns, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19043, https://doi.org/10.5194/egusphere-egu2020-19043, 2020.
EGU2020-12526 | Displays | ITS1.8/SSS1.1
Connecting the green dots: Enabling micro-scale participatory mapping and planning for citizen stewards of biodiversityNatasha Pauli, Clare Mouat, Mariana Atkins, Julia Föllmer, Cristina Estima Ramalho, and Emma Ligtermoet
Within cities, vegetation along road corridors (variously referred to as nature strips, street verges or easements) can play a key role in providing habitat for wildlife and green space benefits for urban dwellers. In the city of Perth, Australia, many local government authorities (LGAs) now permit residents to convert the publicly owned land along the street in front of their dwelling from ‘traditional’ (yet exotic) turf to low growing, native gardens. ‘Verge gardens’ are perceived to require less water and better reflect a local sense of place by using plants endemic to the biodiversity hotspot in which Perth is situated. While interest in native verge gardens is growing rapidly within the community, there is relatively little supporting, spatially-based information for residents. The uncertainty of not knowing where to start is keenly felt by those residents for whom verge gardening is their first foray into gardening with native Western Australian plants in the sandy, nutrient-poor soils of Perth’s Swan Coastal Plain.
Two LGAs in the city of Perth, Western Australia, were the focus of this research, both of which have deployed incentive programs to encourage residents to plant native verge gardens over many years. We conducted detailed semi-structured interviews and participatory verge garden mapping with 22 households who had converted their verges to native gardens over the last ten years, gauging residents’ views on verge gardening, nature, wildlife, community and sense of belonging. A small number of respondents were already highly knowledgeable on the topic of native plants before planting their gardens, while the majority of the respondents had increased their knowledge of native plants from a low initial level through the process of verge gardening. Verge gardens were mapped to highlight plant species diversity, age of garden and garden design style. Some residents had already drawn their own maps by hand, and shared these with us. Others kept detailed records of water usage, maintenance, plant growth and turnover, and insect and bird visitors to the gardens.
A consistent theme that emerged from interviews with the majority of residents who claimed limited familiarity with native plants was a desire for more readily available information to help support their efforts. Information needs included: environmental data on soils, landforms, flora and fauna; knowledge of which plants would grow well in their soil type; where to source locally endemic plants; the most appropriate water and nutrient regime to care for the plants; and nearby examples of successful gardens from which to draw inspiration. Drawing on the results of interviews and participatory mapping, we present a prototype design for a public participatory mobile application that can provide geospatial and ecological information to help support residents, allow for initial planning and progressive micro-scale mapping of verge gardens, and provide the possibility for sharing information on exemplar gardens. Our research feeds into larger conversations among local-level policy makers and planners on urban greening, increasing social cohesion within suburban areas, and providing habitat for wildlife under conditions of environmental change and increasing population density.
How to cite: Pauli, N., Mouat, C., Atkins, M., Föllmer, J., Estima Ramalho, C., and Ligtermoet, E.: Connecting the green dots: Enabling micro-scale participatory mapping and planning for citizen stewards of biodiversity, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12526, https://doi.org/10.5194/egusphere-egu2020-12526, 2020.
EGU2020-21324 | Displays | ITS1.8/SSS1.1
Mapping the soft and ethical dimensions of sea level rise in southern SwedenLisa Van Well, Anette Björlin, Per Danielsson, Godefroid Godefroid Ndayikengurukiye, and Gunnel Göransson
Sea level rise poses profound challenges within current municipal and regional governance since it requires unusually long planning horizons, is surrounded by great uncertainties, and gives rise to novel ethical challenges. Adaptation to climate change is fundamentally an ethical issue because the aim of any proposed adaptation measure is to protect that which is valued in society. One of the most salient ethical issues discussed in the adaptation literature relates to the distribution of climate related risks, vulnerabilities and benefits across populations and over time. Raising sea-walls is typically associated with high costs and potentially negative ecological impacts as well as substantial equity concerns; managed retreat or realignment often causes problems related to property rights; and migration out of low-lying areas can involve the loss of sense and cultural identity and impact on receiving communities.
How can the soft and ethical dimensions of rising mean sea levels be characterized, and how can their consequences be mapped? To help municipalities understand the values and ethics attached to measures for dealing with long-term sea level rise in southern Sweden, we are developing a methodology of soft or ethical values to complement the GIS-mapping of coastal vulnerability based on coastal characteristics and socio-economic factors.
Rather than determining these values a priori, they are being discerned through workshops with relevant stakeholders and in interviews with citizens residing in and utilizing the coastal areas. The methodology attempts to determine the place-based values within coastal communities with a focus on “whose” values, “what” values, and the long-term or short-term nature of values. It builds on an analytical framework developed to acquire information on the behavior, knowledge, perceptions and feelings of people living in, working in and enjoying the coastal areas. In turn, this stakeholder-based information is used to co-create “story maps” as tools to communicate complicated vulnerability analyses, highlight the ethical dimensions of various adaptation measures, raise awareness, and aid decision-makers in taking uncomfortable decisions on “wicked” planning problems around the negative effects of sea level rise, coastal erosion and urban flooding.
This paper presents the methodological development of the task as well as the results of the study in four Swedish municipalities. The representation of the “soft” and ethical values provides an opportunity to clarify these values to policymakers and increase resilience to rising sea levels.
How to cite: Van Well, L., Björlin, A., Danielsson, P., Godefroid Ndayikengurukiye, G., and Göransson, G.: Mapping the soft and ethical dimensions of sea level rise in southern Sweden, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21324, https://doi.org/10.5194/egusphere-egu2020-21324, 2020.
EGU2020-2380 | Displays | ITS1.8/SSS1.1
Heavy Metal City-Zen. Exploring the potential risk of heavy metal contamination of food crop plants in urban gardening contexts using a citizen science approach.Elisabeth Ziss, Wolfgang Friesl-Hanl, Christoph Noller, Andrea Watzinger, and Rebecca Hood-Nowotny
Urban Gardening has become increasingly popular globally in the past two decades as urbanites begin to recognise the benefits of growing their own food and the sense of community these gardening activities engender. These activities grow as citizens reclaim derelict land and are increasingly using roof top gardens and novel containers, providing much needed green oases in the city, concepts which are particularly popular with the “share” generation. However, many such sites are in areas of high traffic density, on brown field sites or on sites overlying landfill, as a result of their urban location. The proximity to such sites may lead to worries about the food safety and reduction of the adoption of such healthy urban gardening practices. One of the main concerns is the transfer of urban pollutants into the consumer’s food chain. Trace metals are one of the contaminants frequently found in urban crops and soils. Perceived concerns about the effects of these heavy metal contaminants on human health often outweigh the true risk; part of the problem is the lack of data in the urban production context. Moreover, collection of city-wide data on the health of the soil is often difficult and expensive to collect. In this project we intend to attempt to overcome these issues by recruiting citizens to conduct simple common collaborative experiments in their urban gardens, from these data we will create a city map of soil health status and providing information on potential risk of heavy metal contaminants and ways in which to mitigate those risks in an Urban Gardening context. We chose a citizen science approach in this project, not only as it will allow us to gather a wealth of data but also it will empower us to jointly generate useful information for the greater public good which can contribute towards creating green sustainable cities.
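As a sketch of how such a city map of soil health status might be derived from citizen samples, the example below flags gardens where a measured concentration exceeds a screening value. The guideline numbers, metal set and function names are hypothetical placeholders for illustration, not regulatory limits or the project’s actual method.

```python
# Sketch with hypothetical screening values (NOT regulatory limits): flag
# citizen garden soil samples whose heavy-metal concentrations exceed a
# threshold, as input for a city-wide soil-health map.
GUIDELINES_MG_KG = {"Pb": 100.0, "Cd": 1.0, "Zn": 300.0}  # assumed placeholder values

def flag_sample(concentrations, guidelines=GUIDELINES_MG_KG):
    """concentrations: {metal: mg/kg}; returns the metals above the guideline."""
    return [m for m, c in concentrations.items() if m in guidelines and c > guidelines[m]]

def soil_health_map(samples):
    """samples: {garden_id: {metal: mg/kg}} -> {garden_id: 'ok' or 'check'}."""
    return {gid: ("check" if flag_sample(conc) else "ok") for gid, conc in samples.items()}

status = soil_health_map({"garden_A": {"Pb": 150.0, "Cd": 0.4},
                          "garden_B": {"Pb": 40.0, "Zn": 120.0}})
```

In practice the flags would be joined to garden coordinates to produce the map, and the screening values would come from the applicable national soil guidelines.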
This project places the citizen at the heart of the experimental process, in contrast to more traditional observational data collection. An experimental approach exposes citizens to the scientific process and enables them to gain tacit knowledge of how scientists handle variance and bias and arrive at scientifically sound, evidence-based conclusions. As a result, citizen science can reassure the public about the rigour of the process of scientific enquiry. In doing so, it can inspire confidence and an understanding of the nuances of political bias, bringing contextual knowledge together through learning by doing.
How to cite: Ziss, E., Friesl-Hanl, W., Noller, C., Watzinger, A., and Hood-Nowotny, R.: Heavy Metal City-Zen. Exploring the potential risk of heavy metal contamination of food crop plants in urban gardening contexts using a citizen science approach., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2380, https://doi.org/10.5194/egusphere-egu2020-2380, 2020.
EGU2020-17950 | Displays | ITS1.8/SSS1.1
ChessWatch: Observations on a Citizen Science approach to catchment management.
Catherine Heppell, Angela Bartlett, Allen Beechey, Paul Jennings, and Helena Souteriou
The River Chess is a chalk stream in South East England (UK) under unprecedented pressure from over-abstraction, urbanisation and climate change, and it currently fails to meet good ecological status. Citizen scientists have been active in the catchment for nine years, carrying out riverfly monitoring chiefly because of concerns about water quality and poor fish populations. The River Chess is also a pilot river for a new catchment-based ‘Smarter Water Catchments’ programme run by the region’s wastewater treatment company (Thames Water), which aims to work with local communities and regulators to deliver improvements to the river by tackling multiple challenges together. The community-led ChessWatch project is part of this initiative and is designed to raise public awareness of threats to the River Chess and to involve the public in river management activities, using a sensor network as a platform. In 2018, four water quality sensors were installed in the river to provide stakeholders with real-time water quality data (at 15-minute intervals) to support catchment management activities. The dataset from the project is intended to support future decision-making in the catchment as part of the five-year ‘Smarter Water Catchments’ approach.
Our presentation will review the successes and drawbacks of the ChessWatch project to date and examine the challenges of linking the data collected by the project to policy and practice in a catchment with multiple stakeholder groups. We present the results of a participatory mapping exercise held at local community events to capture the public use of, and concerns for, the river, which revealed concerns about low flows and water quality issues linked to abstraction and runoff. We show how dissolved oxygen, temperature, turbidity, chlorophyll-a and tryptophan measurements made by the sensors are enabling local stakeholders to better understand the threats to the river arising from urban runoff and changing rainfall patterns, and we examine the challenges of data presentation, sharing and usage in an urbanised catchment with high water demand and multiple conflicting interests.
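To illustrate the kind of processing that a 15-minute sensor series invites (a hypothetical sketch, not ChessWatch code; the readings, variable names and the dissolved-oxygen values are invented), such high-frequency data can be reduced to daily summaries with a few lines of standard Python:

```python
from datetime import datetime, timedelta
from statistics import mean
from collections import defaultdict

# Synthetic 15-minute dissolved-oxygen readings (mg/L) over two days
start = datetime(2019, 7, 1)
readings = [(start + timedelta(minutes=15 * i), 8.0 + 0.5 * (i % 4))
            for i in range(192)]  # 192 x 15 min = 2 days

# Group readings by calendar day and average them
by_day = defaultdict(list)
for timestamp, value in readings:
    by_day[timestamp.date()].append(value)

daily_means = {day: round(mean(values), 2) for day, values in by_day.items()}
```

The same grouping pattern extends directly to other statistics stakeholders might want, such as daily minima of dissolved oxygen during warm, low-flow periods.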
How to cite: Heppell, C., Bartlett, A., Beechey, A., Jennings, P., and Souteriou, H.: ChessWatch: Observations on a Citizen Science approach to catchment management., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17950, https://doi.org/10.5194/egusphere-egu2020-17950, 2020.
EGU2020-20766 * | Displays | ITS1.8/SSS1.1 | Highlight
Openness in geoscience - a quantitative assessment
Jeroen Bosman
There is growing consensus that making our research process and outputs more open is necessary to increase the transparency, efficiency, reproducibility and relevance of research. With that, we should be better able to contribute to answering important questions and overcoming grand challenges. Despite considerable attention to open science, including citizen science, there is no overall baseline showing the current state of openness in our field. This presentation shows results from research that quantitatively charts the adoption of open practices across the geosciences, mostly globally and across the full research workflow. These practices range from setting research priorities, collaborating with global south researchers and researchers in other disciplines, sharing code and data, sharing posters online, sharing early versions of papers as preprints, publishing open access, opening up peer review, and using open licenses when sharing, to engaging with potential stakeholders of research outcomes and reaching out to the wider public. The assessment uses scientometric data, publication data, data from sharing platforms and journals, altmetrics data, and mining of abstracts and other outputs, aiming to address the breadth of open science practices. The resulting picture shows that open science practices are no longer marginal, but at the same time certainly not mainstream. It also shows that limited sharing, limited use of open licenses and limited use of permanent IDs make this type of assessment very hard. Insights derived from the study are relevant inputs to science policy discussions on data requirements, open access, researcher training and the involvement of societal partners.
How to cite: Bosman, J.: Openness in geoscience - a quantitative assessment, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20766, https://doi.org/10.5194/egusphere-egu2020-20766, 2020.
EGU2020-9100 | Displays | ITS1.8/SSS1.1
Green spaces for recreation in densifying urban landscapes through the eyes of the residents – mental mapping in southern Stockholm, Sweden
Jacqueline Otto, Sara Borgström, and Dagmar Haase
The increasing densification of cities worldwide has led to various challenges, one of them being the loss of green spaces, which puts increasing and diversifying pressure on the remaining green infrastructure. As a city's green infrastructure delivers important ecosystem services crucial to its residents' wellbeing, it is of utmost importance to secure a resilient flow of benefits from the remaining green spaces. To find suitable ways to navigate the current challenges and create sustainable and resilient urban landscapes, urban land use planning, decision-making and practical management need to gain deeper insights into how residents perceive and interact with their surrounding landscapes, particularly the remaining green spaces. How people value green spaces, and what they perceive as barriers to them, can strongly influence whether, or to what extent, people use green spaces and hence have access to their potential benefits.
Until now, methods such as on-site visitor studies, online surveys or public participation GIS have been applied to understand how people perceive and use green spaces. Nevertheless, complementary methods are needed that address what people perceive in a landscape based on their own representations and associations. The mental mapping approach has the potential to add new layers of knowledge (e.g. local knowledge, tacit knowledge) about how residents perceive their surrounding city landscape and how different perceptions can evolve from a landscape. By applying this method as a participatory mapping tool and accessing additional sources of knowledge, decisions in urban planning and practical management can be improved and potential land use conflicts proactively detected and navigated.
The city of Stockholm, a rapidly densifying urban environment, was chosen as a case study area to analyze how the mental mapping method can contribute to understanding people's perceptions of their surrounding green spaces, focusing on recreational purposes. In summer 2018, about 90 residents in two neighboring districts in Stockholm were asked to draw a sketch map of outdoor green places they go to for recreational purposes, afterwards answering a few interview questions regarding perceived benefits of and barriers to green space, and sense of place. The collected mental maps showed the residents' spatial perceptions, orientations, preferences, and important landscape elements. Repeatedly drawn landscape elements provided information about a shared geographical imaginary and important hot-spots in the study areas. Ongoing transformation processes through densification were already affecting respondents' perceptions and their sense of place. In conclusion, this study shows how mental mapping has great potential to improve the understanding of people's perceptions of the landscape and its green spaces, which can in turn support a resilient and locally adjusted landscape planning process, design and practical management.
-------------------------------------------------------------------------------------------------------------------------------
The study is part of the Enable research project: http://projectenable.eu/
This research was funded through the 2015–2016 BiodivERsA COFUND call for research proposals, with the national funders the Swedish Research Council for Environment, Agricultural Sciences, and Spatial Planning; the Swedish Environmental Protection Agency; the German Aerospace Center; the National Science Centre (Poland; grant no. 2016/22/Z/NZ8/00003); the Research Council of Norway; and the Spanish Ministry of Economy and Competitiveness.
How to cite: Otto, J., Borgström, S., and Haase, D.: Green spaces for recreation in densifying urban landscapes through the eyes of the residents – mental mapping in southern Stockholm, Sweden, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9100, https://doi.org/10.5194/egusphere-egu2020-9100, 2020.
EGU2020-1472 | Displays | ITS1.8/SSS1.1
A Citizen Science Web Portal for Interdisciplinary Research on Climate Change (BAYSICS): Development and Evaluation
Anudari Batsaikhan and Jens Weismüller
Citizen science can be used to collect vast and timely data, while promoting active learning on selected topics. The Bavarian Citizen Science Portal for Climate Research and Science Communication (BAYSICS) is a scientific project which started in 2018 with 10 partner institutions in Bavaria. It aims to achieve (1) citizens’ participation in climate change research through innovative digital forms, (2) transfer of knowledge on the complexity of climate change and its local consequences, and (3) joint scientific and environmental education goals.
Within the BAYSICS project, a web portal has been developed that forms the interface between researchers and citizens. In the initial phase, the interests of the different research disciplines participating in the project were identified. Currently, the IT structure for the web portal is being developed based on the needs of the project. Free and open-source tools such as PostgreSQL, Django, Gunicorn and Nginx are used. The researchers involved can integrate research-topic-specific questions and data collection guidelines for citizens.
On the web portal, users can choose a topic from four areas (phenology, pollen, trees, and animals) and submit their observations in multiple data types (pictures, geolocations, and texts). The observation data are visualized on a map on the web portal. The data collected within the project are freely available for download on the web portal, while users' privacy is protected. An Application Programming Interface (API) has been developed to enable interaction with other software products and services.
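The submission workflow described above (a topic choice plus pictures, geolocations and text) can be sketched as a simple validated record. This is an illustrative, framework-free simplification — the actual portal is built on Django — and all field names and validation rules here are assumptions:

```python
from dataclasses import dataclass, field
from typing import List

# The four observation topics named in the abstract
TOPICS = {"phenology", "pollen", "tree", "animals"}

@dataclass
class Observation:
    topic: str          # one of the four portal topics
    lat: float          # WGS84 latitude of the observation
    lon: float          # WGS84 longitude of the observation
    note: str = ""      # free-text description
    photos: List[str] = field(default_factory=list)  # photo file names

    def __post_init__(self):
        # Reject unknown topics and impossible coordinates at submission time
        if self.topic not in TOPICS:
            raise ValueError(f"unknown topic: {self.topic}")
        if not (-90 <= self.lat <= 90 and -180 <= self.lon <= 180):
            raise ValueError("coordinates out of range")

# Example submission (hypothetical values)
obs = Observation(topic="tree", lat=48.14, lon=11.58, note="oak leafing out")
```

In a Django-based portal, the same shape would typically live in a model class, with the API exposing it as JSON for the map view and for third-party services.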
A first test phase among project members starts at the beginning of 2020. Afterwards, a second test phase is planned involving potential users (e.g. school students and teachers). The outcomes of both test phases will be used for evaluation.
How to cite: Batsaikhan, A. and Weismüller, J.: A Citizen Science Web Portal for Interdisciplinary Research on Climate Change (BAYSICS): Development and Evaluation, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1472, https://doi.org/10.5194/egusphere-egu2020-1472, 2020.
EGU2020-13593 | Displays | ITS1.8/SSS1.1
Observation and Reporting of Landforms and Landscape Dynamics by Citizens
Daniel Hölbling, Sabine Hennig, Lorena Abad, Simon Ecke, and Dirk Tiede
The observation and reporting of flora and fauna with the help of citizen scientists has a long tradition. However, citizen science projects also have high potential for the reporting and mapping of landforms, as well as for observing landscape dynamics. While remote sensing has opened up new mapping and monitoring possibilities at high spatial and temporal resolutions, there is a growing demand for gathering (spatial) data directly in the field (reports on actual events, landform characteristics and landscape changes, and the provision of reference data and photos). This becomes even more relevant as climate change effects (e.g. glacier retreat, shifting precipitation regimes, melting permafrost) will likely result in more significant morphological changes with an impact on the landscape.
In the project citizenMorph (Observation and Reporting of Landscape Dynamics by Citizens; http://citizenmorph.sbg.ac.at) we developed a pilot web-based interactive application that allows and supports citizens to map and contribute field data (spatial data, in-situ information, geotagged photos) on landforms. Such features are, for example, mass movements (e.g. rockfall, landslide, debris flow), glacial features (e.g. rock glacier, moraine, drumlin), volcanic features (e.g. lava flow, lahar, mudpot), or coastal features (e.g. cliff, coastal erosion, skerry). To design and implement a system that fully matches experts’ and citizens’ requirements, ensures that citizens benefit from participating in citizenMorph, and provides extensive, high-quality data, citizen representatives (mainly high school students, university students, and seniors) took an active and direct part in the development process. These users are considered particularly critical, sensitive to usability and accessibility issues, and demanding when it comes to using information and communication technology (ICT). In line with the concept of participatory design, citizen representatives were involved in all steps of the development process: specification of requirements, design, implementation, and testing of the system. The pilot was built with Survey123 for ArcGIS, which provides the field survey (type and location of the landform, an overview image, and an image series of the landform), and the content management system WordPress, used to create a website that informs, guides and supports the participants. Throughout the survey (https://arcg.is/15WPKv0) and the website, different kinds of information (e.g. project information, guidelines for data collection and reporting, data protection information) are given to the participants. The final citizenMorph system was tested and discussed at several events with citizen representatives in Austria, Germany, and Iceland.
Feedback from the tests was gathered using techniques such as observation, focus groups, and interviews/questionnaires. This allowed us to evaluate and improve the system as a whole.
The collected data, particularly the image series, are used for 3D reconstruction of the surface using Structure from Motion (SfM) and dense image matching (DIM) methods. Moreover, the collected data can be helpful for enriching and validating remote sensing based mapping results and increasing their detail and information content. Having a comprehensive database, holding field data and remote sensing data together, is of importance for any subsequent analysis and for broadening our knowledge about geomorphological landscape dynamics and the prevalence of landforms.
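At the core of the SfM-based 3D reconstruction mentioned above is triangulation: recovering a 3D point from its projections in two (or more) images with known camera geometry. The following toy example uses synthetic cameras and the linear (DLT) formulation with NumPy; it illustrates the principle only and is not the authors' pipeline:

```python
import numpy as np

# Two synthetic pinhole cameras: identical intrinsics K, second camera
# translated by 1 unit along x (a stereo baseline)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.3, -0.2, 5.0, 1.0])  # ground-truth 3D point (homogeneous)

def project(P, X):
    """Project a homogeneous 3D point to 2D pixel coordinates."""
    x = P @ X
    return x[:2] / x[2]

x1, x2 = project(P1, X_true), project(P2, X_true)

def triangulate(P1, P2, x1, x2):
    """Direct Linear Transform: stack the projection constraints and
    take the null vector of A via SVD."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    X = np.linalg.svd(A)[2][-1]   # right-singular vector of smallest singular value
    return X[:3] / X[3]            # dehomogenize

X_est = triangulate(P1, P2, x1, x2)
```

A full SfM workflow additionally estimates the camera poses themselves from matched image features, which is why citizenMorph asks contributors for an image series rather than a single photo.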
How to cite: Hölbling, D., Hennig, S., Abad, L., Ecke, S., and Tiede, D.: Observation and Reporting of Landforms and Landscape Dynamics by Citizens, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13593, https://doi.org/10.5194/egusphere-egu2020-13593, 2020.
The observation and reporting of flora and fauna with the help of citizen scientists has a long tradition. However, citizen science projects have also a high potential for the reporting and mapping of landforms, as well as for observing landscape dynamics. While remote sensing has opened up new mapping and monitoring possibilities at high spatial and temporal resolutions, there is still a growing demand for gathering (spatial) data directly in the field (reporting on actual events, landform characteristics, and landscape changes, provision of reference data and photos). This becomes even more relevant since climate change effects (e.g. glacier retreat, shift of precipitation regime, melting of permafrost) will likely result in more significant morphological changes with an impact on the landscape.
In the project citizenMorph (Observation and Reporting of Landscape Dynamics by Citizens; http://citizenmorph.sbg.ac.at) we developed a pilot web-based interactive application that allows and supports citizens to map and contribute field data (spatial data, in-situ information, geotagged photos) on landforms. Such features are, for example, mass movements (e.g. rockfall, landslide, debris flow), glacial features (e.g. rock glacier, moraine, drumlin), volcanic features (e.g. lava flow, lahar, mudpot), or coastal features (e.g. cliff, coastal erosion, skerry). To design and implement a system that fully matches experts’ and citizens’ requirements, that ensures that citizens benefit from participating in citizenMorph, and that provides extensive, high-quality data, citizen representatives (mainly high school students, students, and seniors) actively and directly took part in the development process. These users are considered as particularly critical, sensitive to usability and accessibility issues, and demanding when it comes to using information and communication technology (ICT). In line with the concept of participatory design, citizen representatives were involved in all steps of the development process: specification of requirements, design, implementation, and testing of the system. The generation of a pilot was done using Survey123 for ArcGIS, a survey to collect data in the field, i.e. type and location of the landform, overview image and image series of the landform, and the content management system WordPress to create a website to inform, guide and support the participants. Throughout the survey (https://arcg.is/15WPKv0) and the website, different kinds of information (e.g. project information, guidelines for data collection and reporting, data protection information) are given to the participants. The final citizenMorph system was tested and discussed on several events with citizen representatives in Austria, Germany, and Iceland. 
Feedback from the tests was gathered using techniques such as observation, focus groups, and interviews/questionnaires. This allowed us to evaluate and improve the system as a whole.
The collected data, particularly the image series, are used for 3D reconstruction of the surface using Structure from Motion (SfM) and dense image matching (DIM) methods. Moreover, the collected data can be helpful for enriching and validating remote sensing based mapping results and increasing their detail and information content. Having a comprehensive database, holding field data and remote sensing data together, is of importance for any subsequent analysis and for broadening our knowledge about geomorphological landscape dynamics and the prevalence of landforms.
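As an illustration of the geometry at the core of SfM-based 3D reconstruction, the following is a minimal sketch, not taken from the project, of linear (DLT) triangulation of a single 3D point from two views; the camera matrices and point coordinates are made up for the example.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : 3x4 camera projection matrices
    x1, x2 : 2D image coordinates of the same scene point in each view
    """
    # Each view contributes two linear constraints on the homogeneous point X
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Solution is the right singular vector with the smallest singular value
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenise

# Two toy cameras: identity pose and a one-unit baseline along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Project a known point into both views, then recover it
X_true = np.array([0.3, -0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0)
x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0)
x2 = x2[:2] / x2[2]

X_est = triangulate(P1, P2, x1, x2)
```

Real SfM pipelines first estimate the camera poses from feature matches across the image series; dense image matching then extends this sparse geometry to a full surface model.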
How to cite: Hölbling, D., Hennig, S., Abad, L., Ecke, S., and Tiede, D.: Observation and Reporting of Landforms and Landscape Dynamics by Citizens, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13593, https://doi.org/10.5194/egusphere-egu2020-13593, 2020.
EGU2020-22523 | Displays | ITS1.8/SSS1.1
NoiseCap: a citizen science experiment to raise awareness of noise environments with cell phones
Lorenzo Bigagli, Roberto Salzano, and Massimiliano Olivieri
The NoiseCap experiment was an unfunded follow-up of the Energic-OD project (European NEtwork for Redistributing Geospatial Information to user Communities - Open Data), which ran from October 2014 to September 2017 with support from the European Union under the Competitiveness and Innovation Framework Programme (CIP).
The project built on one of the Energic-OD outcomes, the NoiseCapture Android app, which allows cell phone users to measure their outdoor noise environment and optionally share their measurements on Noise-Planet, a free and open-source platform and scientific toolset for environmental noise assessment. Each noise measurement is annotated with its location and can be displayed in interactive noise maps, within the app and on the Noise-Planet portal.
In NoiseCap, we were primarily interested in extending the NoiseCapture use case to indoor settings, hence we chose to focus on air traffic noise (namely landing events), which is well characterized and identifiable by citizens living in airport surroundings. Our experiment targeted the neighbourhood of the airport of Florence, Italy, but may be easily reproduced in any similar community. We were also interested in assessing the reliability of commercial cell phones in measuring indoor noise, by comparing the collected data with appropriate reference measurements.
User participation in NoiseCap was completely voluntary, i.e. volunteers were free to choose whether to measure any given landing event during the campaign, which lasted several weeks. Participants were mainly enrolled through the local network of environmental activists and were asked to follow a simple protocol, to ensure their individual measurements would be taken in nearly identical conditions, in particular from the same spot, specified by each volunteer during registration.
From a technological viewpoint, the implementation of NoiseCap has highlighted a substantial lack of open Event-Driven standards and solutions in contemporary Spatial Data Infrastructures, e.g. for processing spatial time series, identifying events, and applying event pattern matching. We have developed a customized architectural approach, including a notification service based on raw ADS-B Mode S data processing and a proprietary solution (Telegram-based push messages), to alert the volunteers with individual time-before-overflight estimations.
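The abstract does not detail how the time-before-overflight is estimated; as a hedged sketch only, assuming the ADS-B feed yields an aircraft position and ground speed and that the aircraft is heading straight for the volunteer's registered spot, such an estimate could look like this (all coordinates and speeds are hypothetical):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def time_before_overflight(ac_lat, ac_lon, ground_speed_kmh, home_lat, home_lon):
    """Seconds until the aircraft reaches the volunteer's spot, assuming it
    flies directly towards it at constant ground speed (a crude model)."""
    d = haversine_km(ac_lat, ac_lon, home_lat, home_lon)
    return d / ground_speed_kmh * 3600.0

# Aircraft roughly 19 km out on final approach at ~270 km/h
# (made-up positions, loosely in the Florence area)
eta = time_before_overflight(43.90, 11.00, 270.0, 43.81, 11.20)
```

In the real system the alert would be pushed (here, via Telegram) once the estimate drops below some lead time, giving the volunteer enough warning to start a measurement.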
In conclusion, the NoiseCap experiment has provided useful insights on Event-Driven Architectures, as well as on the application of citizen science to sensitive issues in local communities.
How to cite: Bigagli, L., Salzano, R., and Olivieri, M.: NoiseCap: a citizen science experiment to raise awareness of noise environments with cell phones, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22523, https://doi.org/10.5194/egusphere-egu2020-22523, 2020.
EGU2020-2805 | Displays | ITS1.8/SSS1.1
qConut: A mobile geospatial application for promoting sustainable and climate-smart Pacific Island agricultural landscapes
Eloise Biggs, Bryan Boruff, Michael Boyland, Eleanor Bruce, Kevin Davies, John Duncan, Clemens Grünbühel, Viliami Manu, Jalesi Mateboto, Pyone Myat Thu, Andreas Neef, John Oakeshott, Natasha Pauli, Helena Shojaei, Renata Varea, and Nathan Wales
Successful ‘smart’ agricultural interventions provide mutually positive impacts for inhabitants’ livelihoods, landscape sustainability, and the capacity of a system to respond effectively to climate variability. Geospatial technological tools have the potential for accurate and timely locational monitoring within multifunctional landscapes. Information derived from using such tools can substantially inform environmental management, policy, and climate-resilient practice. Our research is developing a mobile geospatial application for contemporary data collection and monitoring, allowing the dynamic capture of landscape information. Through community consultations, stakeholder engagement activities, and Information and Communications Technologies for Development (ICT4D) user requirements analysis, we have mapped government data flows and the information needs of smallholder farmers in the Pacific Island nations of Fiji and Tonga. Subsequently, the barriers landscape users experience in accessing and understanding relevant, reliable, and usable environmental data and information were identified. We then designed an open-source mobile geospatial application to facilitate knowledge sharing between different landscape stakeholders. Our multi-user open-source application, qConut, is being co-developed with the Ministry of Forests in Fiji and the Ministry of Agriculture, Food and Forests in Tonga, alongside collaborative participatory contributions from the wider farming communities. Here we present the methodological approach, application functionality, and prototype usability outcomes from field testing undertaken in the Ba Catchment, Fiji, and Tongatapu, Tonga. The qConut application currently targets agricultural extension officers, who are trialling it within the cropping and forestry sectors.
Results of trial usage highlight the importance of understanding the specific needs and capacities of all stakeholder groups in developing effective digitally-enabled climate information services. By utilising mobile geospatial technologies our research is helping to address shortcomings in location-targeted information delivery, environmental monitoring, and data sharing within Pacific Island agricultural communities. See www.livelihoodsandlandscapes.com for further information.
How to cite: Biggs, E., Boruff, B., Boyland, M., Bruce, E., Davies, K., Duncan, J., Grünbühel, C., Manu, V., Mateboto, J., Myat Thu, P., Neef, A., Oakeshott, J., Pauli, N., Shojaei, H., Varea, R., and Wales, N.: qConut: A mobile geospatial application for promoting sustainable and climate-smart Pacific Island agricultural landscapes, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2805, https://doi.org/10.5194/egusphere-egu2020-2805, 2020.
EGU2020-18531 | Displays | ITS1.8/SSS1.1 | Highlight
Advancing in Citizen Science Interoperability by testing standard components between Citizen Observatories
Joan Masó, Ester Prat, Andy Cobley, Andreas Matheus, Núria Julià, Simon Jirka, Friederike Klan, Valantis Tsiakos, and Sven Schade
The first phase of the citizen science Interoperability Experiment, organized by the Interoperability Community of Practice of the EU H2020 WeObserve project under the Open Geospatial Consortium (OGC) innovation program and supported by the four H2020 Citizen Observatories projects (SCENT, GROW, LandSense & GroundTruth 2.0) as well as the EU H2020 NEXTGEOSS project, has concluded with the release of an Engineering Report on the OGC website. The activity, initiated by the European Space Agency (ESA), the EC Joint Research Centre (JRC), the Wilson Center, the International Institute for Applied Systems Analysis (IIASA), and CREAF, aimed to cover aspects of data sharing architectures for citizen science data, data quality, data definitions, and user authentication.
The final aim is to propose solutions for integrating Citizen Science data into the Global Earth Observation System of Systems (GEOSS). The solution is necessarily a combination of technical and networking components, the former being the focus of this work. Applying international geospatial standards in current citizen science and citizen observatory projects, to improve interoperability and foster innovation, is one of the main tasks of the experiment towards this aim.
The main result was to demonstrate that the OGC Sensor Observation Service (SOS) standard can be used for citizen science data (as already proposed in the OGC SWE4CS discussion paper) by implementing it in servers whose data were combined by visualization clients to show Citizen Science observations from different projects together. The adoption of SOS opened new opportunities for creating interoperable components such as a quality assessment tool. In parallel, an authentication server was used to federate the observers of three projects into a single community. Lessons learned will be used to define an architecture for the H2020 COS4Cloud project. The second phase of the Interoperability Experiment has already started, and developments and tests will be conducted by participants over the next 9 months. Some open issues identified and documented in the Engineering Report will be addressed in the second phase of the experiment, including the use of a Definitions Server and the adoption of the OGC SensorThings API as an alternative to SOS. The second phase will finalize in September 2020 with a presentation at the Munich OGC Technical Committee meeting. The call for participation and additional contributions will remain open for the whole duration of the activity.
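To give a flavour of what SOS-based access involves, the following is a hedged sketch of building a SOS 2.0 KVP GetObservation request; the parameter names follow the SOS 2.0 KVP binding, but the endpoint URL and the offering/property identifiers are hypothetical, not taken from the projects above.

```python
from urllib.parse import urlencode

# Hypothetical SOS endpoint and identifiers; real services define their own
base = "https://example.org/sos/kvp"
params = {
    "service": "SOS",
    "version": "2.0.0",
    "request": "GetObservation",
    "offering": "noise_observations",
    "observedProperty": "http://example.org/property/noise_level",
    # Restrict to observations whose phenomenon time falls in January 2020
    "temporalFilter": "om:phenomenonTime,2020-01-01T00:00:00Z/2020-01-31T23:59:59Z",
    "responseFormat": "http://www.opengis.net/om/2.0",
}
url = base + "?" + urlencode(params)
```

A visualization client issuing such requests against several project servers can merge the returned O&M observations into a single map, which is essentially the interoperability demonstrated in the experiment.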
How to cite: Masó, J., Prat, E., Cobley, A., Matheus, A., Julià, N., Jirka, S., Klan, F., Tsiakos, V., and Schade, S.: Advancing in Citizen Science Interoperability by testing standard components between Citizen Observatories, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18531, https://doi.org/10.5194/egusphere-egu2020-18531, 2020.
EGU2020-3240 | Displays | ITS1.8/SSS1.1
Assessing soil aggregate stability with mobile phones
Mario Fajardo, Edward Jones, and Rèmi Wittig
A new methodology for the assessment of soil slaking using a mobile app named SLAKES was developed. The app uses an image recognition algorithm that measures the increasing area of soil aggregates immersed in water at regular intervals over a 10-minute period. This method measures the kinetics of the slaking process and returns a continuous stability index starting at 0 (very stable), with values higher than 7 indicating very unstable aggregates and 14 a commonly observed maximum.
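The published method fits a kinetic model to the area time series extracted from the images; as an illustration of the underlying measurement only, here is a minimal sketch of the relative area increase of an immersed aggregate (all pixel areas are hypothetical, and this simplified ratio is not the exact SLAKES index):

```python
def relative_area_increase(areas_px):
    """Relative increase in aggregate area from the initial frame:
    (A_t - A_0) / A_0 for each sampled time t."""
    a0 = areas_px[0]
    return [(a - a0) / a0 for a in areas_px]

# Hypothetical aggregate areas (pixels) sampled during a 10-minute immersion:
# a stable aggregate barely grows, an unstable one disperses quickly.
stable = [1000, 1005, 1010, 1012, 1015]
unstable = [1000, 1800, 3200, 5200, 7500]

stable_idx = relative_area_increase(stable)[-1]
unstable_idx = relative_area_increase(unstable)[-1]
```

The continuous index reported by the app summarises this kinetic curve, so a compact aggregate yields a value near 0 while a rapidly dispersing one yields a large value.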
The methodology was originally presented in Fajardo et al. (2016) using a dataset covering a large part of the agro-ecological variability of New South Wales (NSW), Australia. By 2020, the Android version of the app (released in 2017) was already in use in 36 countries on 6 continents, and the iPhone version (released in December 2019) is gradually reaching a growing audience.
This work presents a study conducted on a medium-sized farm in New South Wales, Australia. Topsoil (0-10 cm) samples were surveyed and analysed by undergraduate students using the app. Maps of soil aggregate stability were created, showing clear geographical patterns of aggregate stability at medium scale. SLAKES has been shown to be reliable compared with traditional methods in third-party scientific publications. The simplicity of SLAKES makes it an accessible yet powerful way to assess aggregate stability, with great potential for inclusion in both citizen science and open science educational programs.
Fajardo, M., McBratney, A.B., Field, D.J., Minasny, B., 2016. Soil slaking assessment using image recognition. Soil and Tillage Research 163, 119-129.
How to cite: Fajardo, M., Jones, E., and Wittig, R.: Assessing soil aggregate stability with mobile phones, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3240, https://doi.org/10.5194/egusphere-egu2020-3240, 2020.
EGU2020-3555 | Displays | ITS1.8/SSS1.1
The MacroSeismic-Sensor Network
Stefan Mertl, Ewald Brückl, Johanna Brückl, Peter Carniel, Karl Filz, Martin Krieger, and Gerald Stickler
The MacroSeismic Sensor network (MSS network) is a dense layout of 46 custom-built low-cost seismic sensors in populated areas in the southern part of the Vienna Basin, Austria. The recorded ground motion is sent over the Internet to a central server, processed there, and visualized in a web application in near real-time. The MSS network was started in 2014 through a funding program dedicated to the participation of young people and “Citizen Science” (Sparkling Science, a program of the Federal Ministry of Education, Science and Research, Austria) and has since been developed further and kept in operation by private and public funding, the participation of public schools, and the voluntary contributions of individuals.
The MSS uses 4.5 Hz geophones, 16-bit analog-to-digital conversion (ADC) at a sampling rate of 100 samples per second, and the SeedLink protocol for data transmission. A Raspberry Pi single-board computer controls a custom-built ADC circuit board and handles data transmission and communication. Time synchronization is done using the Network Time Protocol.
For the visualization, the peak ground velocity is computed from the two horizontal components at a sampling rate of 1 sample per second. An amplitude-threshold algorithm using the Delaunay triangulation of the MSS network detects seismic events, and an amplitude-based localization method computes their epicenters.
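These two processing steps can be sketched as follows; the details are assumptions rather than the network's actual algorithms (a plain station-coincidence check stands in for the Delaunay-based detection, and an amplitude-weighted centroid stands in for the localization method):

```python
def detect_event(pgv_by_station, threshold, min_stations=3):
    """Declare an event when at least `min_stations` stations exceed the
    peak-ground-velocity threshold in the same 1-s sample. Returns the
    exceeding stations, or None if too few exceed."""
    exceeding = {s: v for s, v in pgv_by_station.items() if v >= threshold}
    return exceeding if len(exceeding) >= min_stations else None

def locate_event(exceeding, coords):
    """Amplitude-weighted centroid of the exceeding stations as a crude
    epicentre estimate (coords in km on a local grid)."""
    w = sum(exceeding.values())
    x = sum(coords[s][0] * v for s, v in exceeding.items()) / w
    y = sum(coords[s][1] * v for s, v in exceeding.items()) / w
    return x, y

# Hypothetical station layout (km) and one second of PGV readings (mm/s)
coords = {"A": (0.0, 0.0), "B": (10.0, 0.0), "C": (5.0, 8.0), "D": (20.0, 20.0)}
pgv = {"A": 0.4, "B": 0.6, "C": 0.5, "D": 0.01}

event = detect_event(pgv, threshold=0.1)
epicentre = locate_event(event, coords) if event else None
```

In the sketch the distant quiet station D is excluded, and the epicentre lands between the three triggered stations, pulled towards the one with the largest amplitude.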
The peak ground velocity and the detected events are presented on a map display in the web application, with a focus on an intuitive presentation of the current state and short-term history of the ground motion within the area of the MSS network.
The output of the MSS network is used by public and private institutions. The regional hazard warning center of Lower Austria (Landeswarnzentrale Niederösterreich) has integrated the MSS network visualization into its infrastructure to inform and warn the general public in case of strong ground motion in the area. A local quarry operator uses the data of the MSS network for transparent monitoring and documentation of its blasting activity.
How to cite: Mertl, S., Brückl, E., Brückl, J., Carniel, P., Filz, K., Krieger, M., and Stickler, G.: The MacroSeismic-Sensor Network, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3555, https://doi.org/10.5194/egusphere-egu2020-3555, 2020.
EGU2020-18649 | Displays | ITS1.8/SSS1.1 | Highlight
On the barriers limiting the adoption of the Earth Observation Copernicus data and services and their integration with non-conventional (e.g. citizen) observations: the EU CoRdiNet project contribution.
Teodosio Lacava, Lucio Bernardini Papalia, Iole Federica Paradiso, Monica Proto, and Nicola Pergola
The Copernicus User Uptake Initiative is part of the European Union’s strategy for increasing awareness of the Copernicus Programme at the European and worldwide levels, fostering the adoption of Copernicus-based data and solutions in the everyday life of every kind of potential stakeholder, from Local and Regional Authorities (LRA) to big and small enterprises to ordinary citizens. The CoRdiNet (Copernicus Relays for digitalization spanning a Network) project was funded in the frame of the Horizon 2020 Space Hubs call (grant agreement no. 821911) to implement and reinforce user uptake actions among the network of so-called Copernicus Relays. The latter, as part of the Space Strategy for Europe of the European Commission, act as Copernicus Ambassadors, contributing to a better dissemination and promotion of Copernicus-based solutions at the local/regional scale. Among the goals of the CoRdiNet project are: i) supporting, promoting, and stimulating digitalization and new business solutions based on Earth observation data from the Copernicus Programme; ii) bundling local expertise in the civil use of Earth observation close to the needs and offers of citizens, administrations, and businesses.
Earth Observation data from space can provide products and services to citizens and can be profitably integrated with non-conventional data, e.g. those coming from citizen observatories and citizen science. However, Copernicus data and information are presently still under-exploited, and further efforts are needed to engage stakeholders (including ordinary citizens) and to investigate the causes that have so far prevented a more systematic and widespread use of Copernicus/EO data. Increased awareness of the Copernicus Programme, its data, products, and services will allow a better integration of non-conventional (e.g. citizen-based) observations, enabling new services and solutions that are closer to citizens’ needs and requirements for a better quality of life.
With this aim, one of the tasks of the project was specifically devoted to the identification and engagement of stakeholders within the CoRdiNet partner geographic regions, including external ones involved through a specific call for expression of interest; it was carried out by TeRN in collaboration with CNR-IMAA. In particular, after their engagement, stakeholders were asked to complete a questionnaire aimed at analyzing their needs and capabilities and evaluating which barriers have so far prevented a more systematic use of Copernicus solutions in their own activities. The results of the analysis of the collected feedback will be presented and discussed in this work, together with a few preliminary recommendations on how to cope with the identified gaps.
How to cite: Lacava, T., Bernardini Papalia, L., Paradiso, I. F., Proto, M., and Pergola, N.: On the barriers limiting the adoption of the Earth Observation Copernicus data and services and their integration with non-conventional (e.g. citizen) observations: the EU CoRdiNet project contribution., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18649, https://doi.org/10.5194/egusphere-egu2020-18649, 2020.
EGU2020-6443 | Displays | ITS1.8/SSS1.1
Using Open Data and Citizen Science in Understanding Disaster Risk: Experience from Western parts of Nepal
Puja Shakya and Binod Prasad Parajuli
Nepal is highly vulnerable to multiple disasters due to its topography and geographic conditions. It also suffers from a deficiency of data for understanding the impacts of disasters and the existing capacities to cope with them. This information scarcity severely hinders the understanding of disasters and their associated risks, and hampers local and regional risk reduction, preparedness and response, limiting rigorous and robust disaster risk modelling and assessment. Regions facing recurrent disasters strongly need a more integrated and proactive perspective on the management of disaster risks and innovations. Recent advances in digital and spatial technologies, citizen science and open data are opening up opportunities for prompt collection, analysis and visualization of locally relevant spatial data. These data could serve as evidence in local development planning and could be linked to different services in the areas concerned, supporting sustained investment in disaster risk management and resilience building. In Nepal's current federal structure there is an acute data deficiency at the local level (municipalities and wards) regarding situation analysis, demographics and statistics, and disaster impacts (hazard, exposure and vulnerability). This hinders all relevant stakeholders, including government, non-government and donor organizations, in assessing the available resources and capacities for effective planning and management of disaster risks. In this context, we are piloting an approach to fill existing data gaps by mobilizing citizen science and open data sources in Western Nepal. We have already tested it through trainings for local authorities and communities on using open data for data collection.
Likewise, in an upcoming project on data innovation, we will create a repository of available open data sources and develop analytical tools for risk assessment that can provide climate-related services. Once tested, these tools can be implemented at the local level to support informed decision making.
How to cite: Shakya, P. and Parajuli, B. P.: Using Open Data and Citizen Science in Understanding Disaster Risk: Experience from Western parts of Nepal, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6443, https://doi.org/10.5194/egusphere-egu2020-6443, 2020.
EGU2020-19952 | Displays | ITS1.8/SSS1.1
CITRAM - Citizen Science for Traffic Management
Benedikt Gräler, Christoph Doll, Jürgen Mück, Albert Remke, Diana Schramm, Arne de Wall, and Herwig Wulffius
The CITRAM project aims to improve traffic quality in cities with the help of floating car data provided by citizens. Within CITRAM, the citizen science platform enviroCar (https://www.enviroCar.org) has been extended and is used to collect floating car data in three German cities. Citizens are invited to collect data in designated field tests while driving their day-to-day routes. The collected trajectories are anonymised, stored and published under an open data policy on a central server.
Dedicated postprocessing services, using new concepts for evaluation and visualization, analyze the data on a daily basis to derive traffic quality characteristics. The raw data and the processed reports are used by the cities and their planners to assess traffic quality and to derive actions for improving traffic management.
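The abstract does not specify which traffic quality characteristics are derived; as an illustration only, one common indicator such a postprocessing service could compute from anonymised trajectories is the ratio of observed mean speed to free-flow speed per road segment. All segment IDs, speeds and reference values below are invented:

```python
# Hypothetical sketch (not the CITRAM implementation): a simple level-of-service
# indicator per road segment, derived from (segment_id, speed) observations
# sampled from anonymised floating car trajectories.
from collections import defaultdict
from statistics import mean

def traffic_quality(observations, free_flow_kmh):
    """observations: iterable of (segment_id, speed_kmh) pairs.
    Returns segment_id -> mean observed speed / free-flow speed, capped at 1.0."""
    by_segment = defaultdict(list)
    for segment_id, speed in observations:
        by_segment[segment_id].append(speed)
    return {
        seg: min(1.0, mean(speeds) / free_flow_kmh[seg])
        for seg, speeds in by_segment.items()
    }

# Invented example: segment "A" is congested, segment "B" flows freely.
obs = [("A", 28.0), ("A", 32.0), ("B", 48.0), ("B", 52.0)]
quality = traffic_quality(obs, {"A": 50.0, "B": 50.0})
```

A ratio near 1.0 indicates free-flowing traffic, while low values flag segments that may deserve a planner's attention.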
The project also raises awareness of environmentally improved driving behavior by enriching the collected floating car data with the individual energy and fuel consumption along the recorded routes of electric and internal combustion engine driven cars. Through the integration of municipal information infrastructure into a dedicated real-time Smart City platform, together with a model accounting for the dynamic control of traffic light systems, a traffic light phase assistant app (ECOMAT) further supports drivers in foresighted, energy-optimized driving by providing Green Light Optimised Speed Advisory (GLOSA) and Time To Green (TTG) information in real time.
The motivation of CITRAM is the coupling of system components that enable scientists, traffic engineers and citizens to collaborate on knowledge acquisition concerning motorized traffic. We will present the developed tool set and the results from the analysis of floating car data collected by citizens. The analysis assesses the quality of traffic flow within the municipality as well as characteristics of individual trajectories and dedicated routes.
How to cite: Gräler, B., Doll, C., Mück, J., Remke, A., Schramm, D., de Wall, A., and Wulffius, H.: CITRAM - Citizen Science for Traffic Management, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19952, https://doi.org/10.5194/egusphere-egu2020-19952, 2020.
EGU2020-10702 | Displays | ITS1.8/SSS1.1
Using geotagged photographs and remote sensing to examine visual and recreational landscape values in Estonia
Oleksandr Karasov, Mart Külvik, Stien Heremans, and Artem Domnich
Integrated use of citizen science (and crowdsourcing in general) and remote sensing is essential to grasp the complexity of the notion of landscape, which rests on both the subjective experience and the objective structure of the environment. Organisation-related landscape attributes, such as landscape diversity and orderliness, as well as the extent of colour harmony, greenness, and transport accessibility, were recently recognised as indicators of the visual and recreational values of the environment. However, it remains an open research question whether these anthropocentric nature-related values depend on landscape attributes quantifiable with GIS and remote sensing, and accurate mapping of aesthetic and recreational landscape services is needed to answer it. Image hosting services and social networks provide a rich source of evidence on aesthetic and recreational landscape experience, allowing these intangible anthropocentric values to be mapped from publicly shared georeferenced photographs. We therefore applied automated image recognition with the Clarifai service to assign each photograph tags reflecting its content, followed by topic modelling (a form of textual analysis) to group the tags into categories.
In this study, we used a combined Flickr and VK.com dataset for the years 2016-2018, collected via the official APIs within the territory of Estonia; outdoor photographs were grouped into three classes: aesthetic landscape experience, outdoor recreation activities, and wildlife watching. Non-relevant photographs and photographs with repeating content from the same author were excluded, leaving a dataset of more than 10,000 photographs for analysis. A cloud-free summertime Landsat-8 mosaic for 2018 was used to estimate landscape diversity, orderliness, the extent of colour harmony, greenness and other metrics, whereas a digital elevation model and a land use/land cover model were used to map landscape coherence and terrain ruggedness and to indicate transport accessibility. Contrary to previous findings, users of Flickr and VK.com tend to photograph locations of lower landscape diversity and lower greenness. We confirm that, for the photographs studied, water presence, terrain ruggedness, and transport accessibility are the best indicators of recreational experience. Colour harmony of land cover and landscape coherence are moderately higher for actual outdoor photographs.
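The abstract does not give formulas for its metrics; as a hedged sketch, landscape diversity is often quantified with the Shannon index over land-cover class proportions in a neighbourhood around each photo location. The 3x3 window of class codes below is invented for illustration:

```python
# Illustrative sketch only (not the authors' exact method): Shannon diversity
# of land-cover classes in a raster window around a geotagged photo, one
# common way a "landscape diversity" metric is computed.
import math
from collections import Counter

def shannon_diversity(land_cover_classes):
    """Shannon index H = -sum(p_i * ln p_i) over land-cover class proportions."""
    counts = Counter(land_cover_classes)
    total = sum(counts.values())
    return -sum((n / total) * math.log(n / total) for n in counts.values())

# Invented 3x3 neighbourhood (class codes, e.g. 1=forest, 2=water, 3=cropland)
window = [1, 1, 1, 1, 2, 2, 3, 3, 3]
h = shannon_diversity(window)  # higher H = more diverse land cover
```

A uniform window yields H = 0, while more evenly mixed class proportions drive H upward, so the index ranks photo surroundings from homogeneous to diverse.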
The performance of these indicators varies among the groups of photographs; wildlife watching is the least predictable class of recreational landscape services. The applicability of remote sensing-based mapping of landscape attributes, and of textual analysis of the tags extracted from outdoor photographs, is examined and discussed. Our results contribute to a deeper understanding of the landscape patterns and processes responsible for visual and recreational values, and the methodology, based on an integrated quantitative approach, supports evidence-based landscape science and decision-making.
How to cite: Karasov, O., Külvik, M., Heremans, S., and Domnich, A.: Using geotagged photographs and remote sensing to examine visual and recreational landscape values in Estonia, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10702, https://doi.org/10.5194/egusphere-egu2020-10702, 2020.
EGU2020-20396 | Displays | ITS1.8/SSS1.1
The Digital Cultural Change within the Program Hamburg Open Science
Martin Scharffenberg and Konstantin Olschofsky
Transparency in Hamburg's scientific community, the further development of Hamburg as a university location, open access to research results, and secure long-term data storage are the main objectives of the Hamburg Open Science program. Hamburg Open Science bundles eight inter-university projects to promote open science at Hamburg's six state universities, the University Medical Center Hamburg Eppendorf and the Hamburg Carl von Ossietzky State and University Library.
The program is funded by the city of Hamburg for the period 2018-2020 and is supported by the Ministry of Science, Research and Equality. The eight projects (science data management, science information system, open access repositories, archive data storage, modern publishing, web platform, 3D and audiovisual science data, and digital cultural change) are developing the basis for the long-term operation of Open Science services from 2021 onwards. The web platform www.openscience.hamburg.de provides access to research results from Hamburg and is to be expanded into an information platform on science in Hamburg.
The idea of Open Science in the program context is that the digitalization of science enables a complete redesign of basic scientific practices and their realisation under the principles of transparency, reproducibility, reusability, and open communication and exchange.
Therefore, the aims of the digital cultural change project are to create awareness of open science among researchers and to integrate openness into their everyday work so that they can continue to focus on their research. The Hamburg Open Science program thus supports all strands of Open Science by actively supporting scientists in moving their working methods, structures and behavior patterns towards open science.
How to cite: Scharffenberg, M. and Olschofsky, K.: The Digital Cultural Change within the Program Hamburg Open Science, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20396, https://doi.org/10.5194/egusphere-egu2020-20396, 2020.
EGU2020-17907 | Displays | ITS1.8/SSS1.1
Participatory mapping and collaborative action for inclusive and sustainable mountain landscape development in Far West Nepal
Prakash Khadka, Wei Liu, Binod Prasad Parajuli, and Uttam Pudasaini
Nepal is one of the world’s most vulnerable countries to the impacts of climate change due to its high-relief topography, heavy monsoon rainfall, and weak governance. Landslides are common across almost all of Nepal’s vast Himalayan mountains, with the Far Western region suffering most, and climate change, coupled with severe under-development, is expected to exacerbate the situation. Deficiencies in spatial data and information seriously hinder the design and effective implementation of development plans, especially in the least developed areas such as the Seti River Basin in Far Western Nepal, where landslides constantly devastate landscapes and communities. We adopted a participatory mapping process with emerging collaborative digital mapping techniques to tackle critical information gaps, especially the lack of spatial risk information at local levels, which compromises efforts toward sustainable landscape planning and use in disaster-prone regions. In short, participatory here refers to working with local stakeholders, and collaborative refers to crowdsourcing map information with citizens and professionals. Engaging a wide range of stakeholders and non-stakeholder citizens in this integrated mapping process builds human capital at local scales, with skills and knowledge of maps and mapping techniques. This approach also increases spatial knowledge and its use in development planning at the local level, and eventually increases landscape resilience through improved information management. We will further discuss how this integrated approach may provide an effective link between planning, designing, and implementing development plans amid fast policy and environmental changes, and its implications for communities in the developing world, especially in the context of climate change and its cascading effects.
How to cite: Khadka, P., Liu, W., Parajuli, B. P., and Pudasaini, U.: Participatory mapping and collaborative action for inclusive and sustainable mountain landscape development in Far West Nepal, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17907, https://doi.org/10.5194/egusphere-egu2020-17907, 2020.
EGU2020-22236 | Displays | ITS1.8/SSS1.1
Knowledge transfer through Citizen Science using the example of a forest inventory campaign
Christian Thiel, Clémence Dubois, Friederike Klan, Carsten Pathe, Christiane Schmullius, Jussi Baade, Marlin Müller, and Felix Cremer
Citizen Science (CS) operates at the interface of engineering, the natural sciences and the social sciences. The topic is currently gaining importance, driven from a political perspective by, among other things, the hope of increasing the acceptance of science and scientific knowledge among the general public. The involvement of non-specialists in the conception and implementation of research projects both enables and requires the development of innovative educational concepts that combine knowledge transfer with added value for science, for example through citizen-based data acquisition. This win-win situation of active learning and the generation of research-relevant data can be implemented in educational institutions in particular by extending didactic concepts to integrate citizen science.
As an example, a project of the DLR in cooperation with the Friedrich Schiller University Jena will be presented. The campaign took place at a site 15 km southeast of Jena featuring planted and intensively managed forest. During the past two years the forest was affected by several stressors such as storm events, long drought periods (spring 2018 and 2019, summer 2018), and bark beetle attacks. Forest management activities were therefore conducted in June 2019 to remove stressed and infected trees. Two CS campaigns were carried out: one before (May) and one after (July) the management action, allowing cross-validation and a check of which trees had been logged. The aim was to record the stem circumference, the species, and other descriptive parameters of the trees. The citizen scientists were recruited from a university lecture for prospective Geography teachers. During the campaign, a new approach for improved positioning under challenging GNSS conditions was tested (offset correction using Bluetooth Low Energy beacons, BLE).
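The abstract gives no algorithmic details of the BLE-based offset correction; the following is only a minimal sketch of the general idea, assuming a beacon at a surveyed position is used to estimate a local GNSS bias that is then subtracted from nearby fixes. All coordinates are invented:

```python
# Hedged sketch, not the campaign's actual algorithm: estimate the local GNSS
# offset at a BLE beacon with known (surveyed) coordinates, then apply the
# correction to other fixes recorded nearby. Coordinates are local east/north
# values in metres, invented for illustration.
def offset_from_beacon(gnss_at_beacon, surveyed_beacon):
    """Difference between a GNSS fix taken at the beacon and its true position."""
    return (gnss_at_beacon[0] - surveyed_beacon[0],
            gnss_at_beacon[1] - surveyed_beacon[1])

def correct(fix, offset):
    """Subtract the estimated local bias from a GNSS fix."""
    return (fix[0] - offset[0], fix[1] - offset[1])

offset = offset_from_beacon(gnss_at_beacon=(102.4, 51.1),
                            surveyed_beacon=(100.0, 50.0))
corrected = correct((203.0, 80.5), offset)
```

Under dense canopy, where multipath can bias GNSS fixes by several metres, such a locally estimated offset is one plausible way to improve the positioning of tree measurements near a beacon.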
How to cite: Thiel, C., Dubois, C., Klan, F., Pathe, C., Schmullius, C., Baade, J., Müller, M., and Cremer, F.: Knowledge transfer through Citizen Science using the example of a forest inventory campaign, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22236, https://doi.org/10.5194/egusphere-egu2020-22236, 2020.
EGU2020-562 | Displays | ITS1.8/SSS1.1
The story of Skeptical Science: How citizen science helped to turn a website into a go-to resource for climate science
Bärbel Winkler and John Cook
Skeptical Science (SkS) is a website with international reach founded by John Cook in 2007. The main purpose of SkS is to debunk misconceptions and misinformation about human-caused climate change; the site features a database that currently holds more than 200 rebuttals based on peer-reviewed literature. Over the years, SkS has evolved from a one-person operation into a team project with science-minded volunteers from around the globe. The Skeptical Science team also actively contributes to published research, a highlight being the often-cited 97% consensus paper (Cook et al. 2013), for which team members content-analysed about 12,000 abstracts and whose publication fee was crowd-funded by readers of the website.
The SkS author community formed in 2010 in response to the proposal to expand existing rebuttals to three levels: basic, intermediate, and advanced. Since then, team members regularly collaborate to write and review rebuttal and blog articles for the website. Volunteer translators from many countries have translated selected content into more than 20 languages including booklets such as The Debunking Handbook, The Uncertainty Handbook or The Consensus Handbook. In addition to the already mentioned consensus study, team members have helped with other research projects initiated by John Cook such as the efforts to train a computer to detect and classify climate change misinformation. Another significant project is the Massive Open Online Course (or MOOC) “Denial101x: Making Sense of Climate Science Denial” in collaboration with the University of Queensland, for which the SkS team produced numerous video lectures and for which forum moderators were recruited. Outreach activities such as the “97 Hours of Consensus” were crowdsourced with team members collecting and organising content and providing technical support.
Challenges: Due to the volunteer nature of people’s involvement, there are some challenges involved as not everybody is available to help with tasks all the time. People help as much – or as little – as their time allows and there’s always some turn-over with new people joining while others leave.
Skeptical Science (SkS) website (accessed November 29, 2019)
Cook, J., Nuccitelli, D., Green, S. A., Richardson, M., Winkler, B., Painting, R., Way, R., Jacobs, P., & Skuce, A. (2013). Quantifying the consensus on anthropogenic global warming in the scientific literature. Environmental Research Letters, 8(2), 024024.
Cook, J., Schuennemann, K., Nuccitelli, D., Jacobs, P., Cowtan, K., Green, S., Way, R., Richardson, M., Cawley, G., Mandia, S., Skuce, A., & Bedford, D. (April 2015). Denial101x: Making Sense of Climate Science Denial. edX.
Cook, J., & Lewandowsky, S. (2011). The Debunking Handbook. St. Lucia, Australia: University of Queensland. ISBN 978-0-646-56812-6.
How to cite: Winkler, B. and Cook, J.: The story of Skeptical Science: How citizen science helped to turn a website into a go-to resource for climate science, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-562, https://doi.org/10.5194/egusphere-egu2020-562, 2020.
EGU2020-20373 | Displays | ITS1.8/SSS1.1
Can we use citizen science to upscale soil data collection?Christian Schneider, Susanne Döhler, Luise Ohmann, and Ute Wollschläger
Citizen science approaches are still relatively rare in soil science. However, the Tea Bag Index (TBI) has been successfully implemented in projects all over the world.
Our citizen science project “Expedition ERDreich – Mit Teebeuteln den Boden erforschen” (EE) aims to upscale open soil data by applying the TBI as well as other soil assessment methods all over Germany. Besides the strong focus on creating awareness for soils and their functions, we want to answer the questions listed below.
The project will combine aspects of co-production with environmental education. Co-production means that soil data will be compiled individually by citizen scientists with the support of a team of scientists from a network of project partners. While conducting various soil assessments and experiments, participating citizen scientists will be given background information and guidance meant to educate them and to raise awareness of soils and soil quality.
We aim to involve a broad spectrum of citizens from various backgrounds, for example school children, students, farmers, forest owners, gardeners, municipal administrations, and of course soil scientists.
Within the project, citizen scientists will submit turnover data from their location, together with information on the sampling sites as well as on soil properties such as pH value, soil texture, and soil color. This information will be complemented with climatic and geoscientific co-variables by the scientific project team.
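The turnover data reduce to a short calculation: green tea mass loss yields a stabilisation factor S, which predicts the labile fraction of the rooibos tea, from which the decomposition rate k follows. A minimal sketch, using the hydrolysable fractions published for the TBI protocol (0.842 for green tea, 0.552 for rooibos) and hypothetical example masses:

```python
import math

# Illustrative Tea Bag Index (TBI) calculation; constants are the
# published hydrolysable fractions of green and rooibos tea.
H_GREEN, H_ROOIBOS = 0.842, 0.552

def tea_bag_index(green_remaining, rooibos_remaining, days=90):
    """Return (S, k): stabilisation factor and decomposition rate.

    *_remaining: fraction of initial dry mass left after incubation.
    """
    a_green = 1.0 - green_remaining        # fraction actually decomposed
    S = 1.0 - a_green / H_GREEN            # stabilisation factor
    a_rooibos = H_ROOIBOS * (1.0 - S)      # predicted labile rooibos fraction
    # Rooibos is assumed to still be in the first (labile) decay phase:
    # W(t) = a_r * exp(-k t) + (1 - a_r)  ->  solve for k
    k = -math.log((rooibos_remaining - (1.0 - a_rooibos)) / a_rooibos) / days
    return S, k

# Hypothetical masses: 40% of green tea and 75% of rooibos remain after 90 days.
S, k = tea_bag_index(green_remaining=0.40, rooibos_remaining=0.75, days=90)
```

Citizen scientists would only weigh the bags before burial and after retrieval; the index itself can then be computed centrally from the submitted masses.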
So far, we have identified the following main challenges:
- How can citizens from various backgrounds and in various geographical locations be addressed and involved in the project?
- How do we get high-quality soil data while still teaching soil awareness?
- How do we address the complexity of soils in soil education?
- How do we manage the quality of data and identify potential errors?
- How do we communicate data management procedures to keep the project as transparent as possible?
- What added value can we give back to citizen scientists, and how?
- How do we involve citizen scientists in the scientific process beyond collecting data and beyond the current project's timeframe?
How to cite: Schneider, C., Döhler, S., Ohmann, L., and Wollschläger, U.: Can we use citizen science to upscale soil data collection?, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20373, https://doi.org/10.5194/egusphere-egu2020-20373, 2020.
EGU2020-21726 | Displays | ITS1.8/SSS1.1 | Highlight
A multidisciplinary scientific outreach journal designed for and made by middle and high school students to bring research closer to the classroomBenjamin Dalmas, Barbara Goncalves, Lucie Poulet, Antoine Vernay, and Mathilde Vernay
One mission of a researcher is to share their work and results with the general public, but accurately and effectively communicating scientific results to a broad audience is a real challenge. Indeed, results are published in scientific journals that are mostly available only at high cost; the vocabulary used makes it hard for people outside the field to understand the concepts; and there is sometimes a language barrier for non-English speakers.
However, to make informed decisions on a variety of scientific and societal topics, citizens need to have access to and keep up with these research results. To build critical thinking, this practice should be developed from an early age. We created the journal DECODER (French for “to decode”, journal-decoder.fr), which enables a researcher and a class to work together on their own simplified research article. The middle and high school students can act as reviewers of the researcher’s shortened article, or they can write an outreach article on a given topic in which the researcher is a specialist. Articles are then published under a Creative Commons license and are freely available on the journal website so that as many people as possible can benefit. Our partner researchers work in space agencies, academia, or industry, in a variety of disciplines from STEM to the social sciences. The emphasis is on multidisciplinarity, to raise students’ awareness of the breadth of research and to show them that research is not limited to STEM fields but also exists in economics and the humanities. This highlights the significance and ubiquity of transdisciplinarity in addressing real-world problems, such as global change issues, biological and physical questions, or space exploration, from different perspectives. In its first year and a half, the journal has already involved more than ten classes in five different schools, and 18 articles have been submitted by ten researchers. The project allows tight and direct interaction between students and researchers, and it makes students responsible for content published to a large audience. Thanks to an easy procedure for classes and researchers and a small time commitment, our hope is to mobilize the largest possible scientific community to help people become more critical and gain access to scientific results.
How to cite: Dalmas, B., Goncalves, B., Poulet, L., Vernay, A., and Vernay, M.: A multidisciplinary scientific outreach journal designed for and made by middle and high school students to bring research closer to the classroom, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21726, https://doi.org/10.5194/egusphere-egu2020-21726, 2020.
EGU2020-1381 | Displays | ITS1.8/SSS1.1
Perception of Rural Tourism From the Perspective of Tourists. Case study Mountain area of Suceava CountyLiliana Daniela Diacon, Vasile Efros, and Cristian Ciubotaru
Rural tourism is an activity that, in comparison with consumer industries, protects the environment, becoming an ally in its conservation. Of all the rural areas of the country, the one with the greatest potential is the mountain area, which is why we chose the mountain region of Suceava county as a case study. Starting from the hypothesis that the tourist offer of the mountain area is attractive, the research assesses the degree of tourist satisfaction with the tourist offer of the rural area of Suceava county.
The methodology is based on a questionnaire survey conducted through face-to-face interviews between September 1 and November 30, 2019. The questionnaire was anonymous, in order to ensure the highest degree of sincerity in the answers, and was administered to 630 tourists in the mountain region of Suceava county.
The present study shows that most tourists who choose Suceava county as their destination reside in neighboring counties, especially in the region of Moldova. One element of attractiveness is the lower prices compared with other tourist areas of the country. The granting of holiday vouchers and cards to public-sector employees in Romania has also increased tourist demand for unpolluted areas. Moreover, the statistical data confirm that the number of agrotourism guesthouses in Suceava county is increasing from year to year; in 2019 Suceava ranked second after Brasov county. The hypothesis is confirmed: rural tourism is a growing phenomenon, although the length of stay of tourists in the rural area is on average 1-3 days.
In conclusion, the analysis of the results shows that tourists are attracted by the beauty of the landscape, the existing cultural attractions, the local gastronomy, and the hospitality of the hosts, all at lower prices compared with areas of great tourist interest in the country.
Keywords: rural tourism, mountain area, tourists, Suceava County
How to cite: Diacon, L. D., Efros, V., and Ciubotaru, C.: Perception of Rural Tourism From the Perspective of Tourists. Case study Mountain area of Suceava County, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1381, https://doi.org/10.5194/egusphere-egu2020-1381, 2020.
EGU2020-2190 | Displays | ITS1.8/SSS1.1 | Highlight
The Geogarden of the University Roma Tre: creation of a prototype of a Geological Garden of Lazio for the dissemination of Geological Sciences in RomeSveva Corrado, Andrea Bollati, and Marina Fabbri
Between 2017 and 2019, a prototype of a geological garden for the dissemination of the Geological Sciences to the general public was created in the open-air spaces of the Department of Sciences of Roma Tre University. This first nucleus is the result of a Citizen Science activity carried out by students of the high schools of Rome and its province, conceived and guided by a group of university researchers and high school teachers, in collaboration with local institutions and some mining companies operating in the surroundings of Rome. The prototype currently consists of six large rock samples representative of lithotypes cropping out in the Roman Campagna and in the nearby Central Apennines, which make it possible to recount the evolution of the territory surrounding the city of Rome over roughly the past 15 Myr, with particular reference to the history of the Roman countryside in the Quaternary period. Guided tours for schools and the general public, as well as events popularizing scientific culture at various scales, have been the main dissemination activities carried out so far. The garden is currently being expanded and integrated with numerous plant species representative of the botanical heritage of the Lazio region.
How to cite: Corrado, S., Bollati, A., and Fabbri, M.: The Geogarden of the University Roma Tre: creation of a prototype of a Geological Garden of Lazio for the dissemination of Geological Sciences in Rome, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2190, https://doi.org/10.5194/egusphere-egu2020-2190, 2020.
ITS1.10/NH9.27 – Inter- and transdisciplinary research and practice: state of transformative knowledge to address global change challenges in mountain regions of the world
EGU2020-6453 | Displays | ITS1.10/NH9.27 | Highlight
Climate Change, Traditional Life Styles and Livelihood Questions: Socio-cultural and Physical Constraints of Remotely Located Societies of Western Himalaya.Virender Singh Negi
The Himalaya presents a great range of lifestyles and livelihood bases to its native communities. Extreme climatic conditions impose restrictions on living conditions; local ownership, alternative sources of income, women's empowerment, and long-term sustainable livelihoods are the main elements of community work. However, improvements in communication and transportation systems have improved the lifestyle of the people living in these regions. The breadth of natural biodiversity in the Himalayas is complemented by a rich mosaic of cultures, traditions, and peoples. The ethnic groups living in remote valleys of the Himalayan region have generally conserved their traditional cultural identities. Ancient traditions and livelihoods of many communities remain woven into the balanced use of natural resources. They depend on these resources for their livelihoods and value ecosystem services such as freshwater, erosion control, and agricultural and subsistence harvests.
Forests are strained as demand continues to grow for timber and food crops. Himalayan communities have suffered a disastrous slump in production due to erratic weather in recent years, although the government is helping with various insurance and relief schemes. For such remotely located communities in this part of the Himalaya, agriculture, nomadic herding, hunting, and gathering are the main activities, yet people are often unable to fulfil their basic requirements. The present paper investigates the factors that have brought about physical and socio-economic changes in various parts of the Indian Himalaya, interlinked with the fragile Himalayan environment, through mapping, monitoring, and change analysis with the help of remote sensing and GIS techniques.
How to cite: Negi, V. S.: Climate Change, Traditional Life Styles and Livelihood Questions: Socio-cultural and Physical Constraints of Remotely Located Societies of Western Himalaya., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6453, https://doi.org/10.5194/egusphere-egu2020-6453, 2020.
EGU2020-9255 | Displays | ITS1.10/NH9.27
Interdisciplinary collaboration and joint knowledge production in climate change adaptation in mountain regions in South Asia, Latin America and SwitzerlandChristian Huggel and Veruska Muccione and the knowledgeforclimate.net
The level of already committed climate change implies massive impacts and risks to natural and human systems on the planet which probably have been underestimated so far, as recent research and science-policy assessments such as from the IPCC indicate. Scenarios with less stringent emission reduction pose even greater risks of partly unknown dimensions. Adaptation to climate change is therefore of critical importance, in particular for countries with low adaptive capacity where climate change can seriously undermine efforts for sustainable development. Mountains are among the hotspots of climate impacts and adaptation.
Climate adaptation is fundamentally an interdisciplinary and transdisciplinary endeavor. Various sources of knowledge and perspectives need to be considered and integrated to produce actionable, solution-oriented knowledge. While experience with joint knowledge production (JKP) has been growing in recent years, it is still unclear how to design and implement such a process in the context of climate adaptation.
Here we analyze experiences from a new initiative and network for climate adaptation in education and research with institutions from South Asia, the Andes and Central America, and Switzerland (knowledgeforclimate.net). The partners form a highly multidisciplinary network with diverse cultural and institutional backgrounds, which is both an important asset and a challenge for interdisciplinary collaboration. At the core of the collaboration are case studies conducted in mountain contexts in all six countries, developed from different disciplinary perspectives, which form the basis for both research and teaching. JKP takes place at different levels, which need to be systematically and carefully analyzed.
We find that the processes of JKP are diverse, complex, and highly dependent on the interests and roles of actors within a network. To keep such processes alive, signposts in the form of analyses and intermediary products along the network's lifetime should be positioned as a means of stocktaking and monitoring for the future.
We suggest that existing models of JKP need to be broadened to better accommodate the high diversity and non-linearity of JKP processes. JKP does not just happen as a product of interdisciplinary collaboration but needs continuous reflection, research, updating, and upgrading. Trust and a range of common interests among partners in the network have been identified as key aspects of the process. A further particular challenge is to dedicate enough time and resources to the framing process, but then to move clearly beyond it into the action and solution space. It is essential to harmonize different forms of knowledge pertinent to climate adaptation in mountains and to harvest this diversity while accepting possibly limited consensus; yet it is not a priori predictable where this balance lies.
How to cite: Huggel, C. and Muccione, V. and the knowledgeforclimate.net: Interdisciplinary collaboration and joint knowledge production in climate change adaptation in mountain regions in South Asia, Latin America and Switzerland, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9255, https://doi.org/10.5194/egusphere-egu2020-9255, 2020.
The level of climate change already committed implies massive impacts and risks to natural and human systems across the planet, which have probably been underestimated so far, as recent research and science-policy assessments such as those of the IPCC indicate. Scenarios with less stringent emission reductions pose even greater risks of partly unknown dimensions. Adaptation to climate change is therefore of critical importance, in particular for countries with low adaptive capacity, where climate change can seriously undermine efforts towards sustainable development. Mountains are among the hotspots of climate impacts and adaptation.
Climate adaptation is fundamentally an interdisciplinary and transdisciplinary endeavor. Various sources of knowledge and perspectives need to be considered and integrated to produce actionable and solution-oriented knowledge. While experience with joint knowledge production (JKP) has been growing over recent years, there is still a lack of clarity on how to design and implement such a process in the context of climate adaptation.
Here we analyze experiences from a new initiative and network for climate adaptation in education and research with institutions from South Asia, the Andes and Central America, and Switzerland (knowledgeforclimate.net). The partners form a highly multi-disciplinary network with diverse cultural and institutional backgrounds, which is both an important asset and a challenge for interdisciplinary collaboration. At the core of the collaboration are case studies conducted in mountain contexts in all six countries, which are developed from different disciplinary perspectives and form the basis for both research and teaching. JKP takes place at different levels, which need to be systematically and carefully analyzed.
We find that the processes of JKP are diverse, complex, and highly dependent on the interests and roles of the actors within a network. To keep such processes alive, signposts in the form of analyses and intermediary products should be positioned along the network's lifetime as a means of stocktaking and monitoring for the future.
We suggest that existing models of JKP need to be broadened to better accommodate the high diversity and non-linearity of JKP processes. JKP does not just happen as a by-product of interdisciplinary collaboration but needs continuous reflection, research, updating and upgrading. Trust and a range of common interests among the partners in the network have been identified as key aspects of the process. A further particular challenge is to dedicate enough time and resources to the framing process while then moving clearly beyond it into the action and solution space. Harmonizing the different forms of knowledge pertinent to climate adaptation in mountains, and harvesting this diversity while accepting possibly limited consensus, is essential; yet it is not predictable a priori where this balance lies.
How to cite: Huggel, C. and Muccione, V. and the knowledgeforclimate.net: Interdisciplinary collaboration and joint knowledge production in climate change adaptation in mountain regions in South Asia, Latin America and Switzerland, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9255, https://doi.org/10.5194/egusphere-egu2020-9255, 2020.
EGU2020-12840 | Displays | ITS1.10/NH9.27
An integrative approach to assessing the resort and recreation potential of the highlands of the North Caucasus of the Russian Federation in the context of global urbanization
Natalia Efimenko, Elena Chalaya, and Nina Povolotskaya
The experience of interdisciplinary studies of the impact of urbanization on the resort and recreation potential of the mountainous territories of the North Caucasus (MTNC), for the purposes of medical balneology and recreation, is considered.
The State Register of Natural Curative Resources (NCR) of the MTNC includes unique mineral waters and natural peloids of various physicochemical and microbiological composition, a favorable climate, and picturesque mountain landscapes, which are integrated into the existing and rapidly developing socio-ecological resort and recreation infrastructure and system of spa treatment and recreation. The risks to the mountain resort and recreation ecosystem include high sensitivity to climate change and anthropogenic impacts.
High demand for the resort and recreation services of the MTNC and increasing urbanization prompted comprehensive monitoring of the dynamics of the state of the NCR, experimental studies of the action mechanisms of natural healing factors, and the development of a model for ranking mountain areas by integrated resort and recreation potential (IRRP):
IRRP = ∑(IMgmr + IMbkr + IMgl) / n, where IMgmr, IMbkr and IMgl are the integrated modules (indicators) of hydromineral, bioclimatic and landscape resources, and n is the number of modules.
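As a minimal illustration, the ranking formula above can be sketched in Python. The module scores below are hypothetical placeholders for normalized indicator values, not figures from the study:

```python
def irrp(module_scores):
    """Integrated resort and recreation potential:
    the mean of the integrated module scores (IM)."""
    return sum(module_scores) / len(module_scores)

# Hypothetical normalized scores for one mountain area:
# hydromineral (IMgmr), bioclimatic (IMbkr), landscape (IMgl)
im_gmr, im_bkr, im_gl = 0.8, 0.6, 0.9
print(round(irrp([im_gmr, im_bkr, im_gl]), 3))  # 0.767
```

Areas can then be ranked by their IRRP value, provided the three module scores are first brought onto a common normalized scale, which is the point of the modular approach described below.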
The model comprises three large blocks of monitoring studies of numerous natural parameters characterizing the properties of underground mineral waters and natural peloids; bioclimatic functions (degree of comfort and biotropy); and topographic features, vegetation, soils, and the picturesque, attractive mountain landscape. The modular approach adopted in balneology, in which the medico-biological categorization of NCR parameters is established experimentally, made it possible to overcome the differences in units of measurement among the results of multifactor natural monitoring [1, 2].
In conclusion, the integrative approach adopted in this work to assess the resort and recreation potential of the highlands made it possible to evaluate the contractivity (comfort, health and pathogenic functions), stability, diversity and attractiveness of the natural complexes of the federal resorts of the North Caucasus, and to substantiate priorities for the territorial development of resort and recreation infrastructure in the Caucasian Mineral Waters resort region, the Republic of North Ossetia-Alania, and the Karachay-Cherkess Republic.
References
1. Resort study of the Caucasian Mineralnye Vody region / ed. V.V. Uyba. Scientific publication. Pyatigorsk, 2011. 368 pp.
2. A technique of balneological assessment of forest-park landscapes of mountain territories for climatic landscape therapy: a manual for doctors. Pyatigorsk, 2015. 26 pp.
How to cite: Efimenko, N., Chalaya, E., and Povolotskaya, N.: An integrative approach to assessing the resort and recreation potential of the highlands of the North Caucasus of the Russian Federation in the context of global urbanization, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12840, https://doi.org/10.5194/egusphere-egu2020-12840, 2020.
EGU2020-17217 | Displays | ITS1.10/NH9.27
The Spatial Pattern of Ski Areas and Its Driving Factors in China: A Strategy for Healthy Development of the Ski Industry
Hongmin An, Cunde Xiao, and Minghu Ding
The development of ski areas can bring socio-economic benefits to mountain regions. At present, the ski industry in China is developing rapidly, and the number of ski areas is increasing dramatically. However, understanding of the spatial pattern of these ski areas and its driving factors is limited. This study collected detailed data on ski areas and the surrounding natural and economic factors in China. Criteria for the classification of ski areas were proposed, and a total of 589 alpine ski areas in China were classified into three types: ski resorts for vacationing (va-ski resorts), ski areas for learning (le-ski areas) and ski parks for experiencing skiing (ex-ski parks), with proportions of 2.1%, 15.4% and 82.5%, respectively, indicating that the Chinese ski industry is still dominated by small ski areas. The overall spatial pattern of ski areas was clustered, with a nearest neighbor indicator (NNI) of 0.424; ex-ski parks and le-ski areas exhibited clustered distributions with NNIs of 0.44 and 0.51, respectively, while va-ski resorts were randomly distributed with an NNI of 1.04. The theory and methods of spatial autocorrelation were used for the first time to analyze the spatial pattern and driving factors of ski areas. The results showed that ski areas in cities had a positive spatial autocorrelation, with a Moran's index of 0.25. The results of Local Indicators of Spatial Association (LISA) showed that ski areas are mainly concentrated in three regions: the Beijing-centered Yanshan-Taihang Mountains and Shandong Hills, the Harbin-centered Changbai Mountains, and the Urumqi-centered Tianshan-Altay Mountains. The first is mainly driven by socio-economic factors, the latter two mainly by natural factors. Ski tourism in China still faces many challenges.
The government sector should strengthen supervision, develop a ski industry alliance, and promote the healthy and sustainable development of the ski industry in the future.
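The nearest neighbor indicator reported above compares the observed mean nearest-neighbor distance with the distance expected under complete spatial randomness (the Clark-Evans statistic). A minimal sketch of that computation, using hypothetical coordinates rather than the study's ski-area data:

```python
import math

def nearest_neighbor_index(points, area):
    """Clark-Evans nearest neighbor index: observed mean nearest-neighbor
    distance divided by the expected mean distance for a random pattern,
    E[d] = 0.5 / sqrt(n / area).
    NNI < 1 indicates clustering, ~1 randomness, > 1 dispersion."""
    n = len(points)
    # Mean distance from each point to its closest other point
    d_obs = sum(
        min(math.dist(points[i], points[j]) for j in range(n) if j != i)
        for i in range(n)
    ) / n
    d_exp = 0.5 / math.sqrt(n / area)
    return d_obs / d_exp

# Four tightly packed points in a large study area -> clustered (NNI < 1)
clustered = [(0, 0), (0, 0.1), (0.1, 0), (0.1, 0.1)]
print(round(nearest_neighbor_index(clustered, area=100.0), 2))  # 0.04
```

A regular grid of points in a small area gives NNI > 1 (dispersion), matching the interpretation of the va-ski resort value of 1.04 as an approximately random pattern.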
How to cite: An, H., Xiao, C., and Ding, M.: The Spatial Pattern of Ski Areas and Its Driving Factors in China: A Strategy for Healthy Development of the Ski Industry, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17217, https://doi.org/10.5194/egusphere-egu2020-17217, 2020.
EGU2020-19480 | Displays | ITS1.10/NH9.27 | Highlight
Pilot study on the potential impact of climate variability on sedimentation in Andean reservoirs, based on data from the Cañete catchment, Peruvian Coastal Range
Miluska A. Rosas, Veerle Vanacker, Willem Viveen, Ronald R. Gutierrez, and Christian Huggel
The global water storage capacity of hydroelectric reservoirs is decreasing annually, while economic activity, the hydropower industry and the world population continue to grow strongly. The boom in hydropower development in Andean river basins has been identified as one of the top 15 global conservation issues. For this region, electricity generation might increase by 550% between 2005 and 2050, requiring an increase in water volume from 70.5 billion m³ to 150.7 billion m³. Of the Andean countries, Peru has the highest number of existing and proposed hydropower projects, because of its rapidly growing energy demand (estimated at 8% per year) and a regulatory framework that promotes renewable energy. Despite initial efforts, studies describing the impact of changing sediment transfer under climate change on hydroelectric infrastructure are still limited.
This paper evaluates the potential impact of climate variability on the water storage capacity of hydroelectric reservoirs in the Andean countries, via a case study of the Cañete River in the Peruvian Coastal Range. The catchment houses the 220 MW El Platanal hydroelectric plant and the Capillucas reservoir, which provide the surrounding areas with water and energy. We used a hydrological model (HEC-HMS) coupled with a sediment transport model (HEC-RAS) to simulate future changes in river discharge and sediment load. Ten scenarios were developed, combining two different precipitation patterns with five different precipitation rates.
The average sediment load of the Cañete River was estimated at 981 kton/yr upstream of the Capillucas reservoir, which is in agreement with published erosion rates for the area. Our results show that the lifespan of the Capillucas reservoir ranges from 7 years for the most pessimistic scenario to 31 years for the most optimistic scenario. This is much shorter than the projected lifespan of 50 years. This pilot study illustrates the vulnerability of Andean hydroelectric reservoirs to future climate change.
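A back-of-envelope version of such a lifespan estimate divides the reservoir's storage volume by the volume of sediment trapped per year. In the sketch below, only the 981 kton/yr sediment load comes from the abstract; the storage capacity, trap efficiency and sediment bulk density are hypothetical placeholders, not parameters of the Capillucas study:

```python
def reservoir_lifespan_years(capacity_m3, sediment_load_ton_yr,
                             trap_efficiency=0.9, bulk_density_ton_m3=1.3):
    """Years until trapped sediment fills the reservoir's storage volume."""
    # Convert the incoming mass flux to a deposited volume per year
    deposited_m3_yr = sediment_load_ton_yr * trap_efficiency / bulk_density_ton_m3
    return capacity_m3 / deposited_m3_yr

# 981 kton/yr load (from the abstract); hypothetical 10 Mm³ capacity:
print(round(reservoir_lifespan_years(10e6, 981e3), 1))  # 14.7
```

With these placeholder values the estimate falls inside the 7-31 year range reported above, though the study itself derives its lifespans from coupled HEC-HMS/HEC-RAS simulations rather than a single bulk calculation.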
How to cite: Rosas, M. A., Vanacker, V., Viveen, W., Gutierrez, R. R., and Huggel, C.: Pilot study on the potential impact of climate variability on sedimentation in Andean reservoirs, based on data from the Cañete catchment, Peruvian Coastal Range., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19480, https://doi.org/10.5194/egusphere-egu2020-19480, 2020.
EGU2020-19971 | Displays | ITS1.10/NH9.27
The Alpine Environmental Data Analysis Centre (www.alpendac.eu) – A Component of the Virtual Alpine Observatory (VAO) (www.vao.bayern.de)
Michael Bittner, Dominik Laux, Oleg Goussev, Sabine Wüst, Jana Handschuih, Alexaner Götz, Helmut Heller, Johannes Munke, Roland Mair, Bianca Wittmann, Inga Beck, Markus Neumann, and Till Rehm
The “Alpine Environmental Data Analysis Centre” (AlpEnDAC) is a research data management and analysis platform for research facilities around the Alps and similar mountain ranges. It provides the computational infrastructure for the Virtual Alpine Observatory (VAO), which is a research network of European high-altitude research stations (http://www.vao.bayern.de).
Within the scope of previous work, the platform was developed with a focus on research data and metadata management as well as analysis and simulation tools. It offers the possibility to store and retrieve data securely (data-on-demand), to share them with other scientists, and to interpret them with the help of computing-on-demand solutions via a user-friendly, web-based graphical user interface. The AlpEnDAC allows the analysis and consolidation of heterogeneous data sets from ground-based and satellite instruments.
In a further development phase, launched on 1 August 2019, the existing services of the AlpEnDAC will be supplemented by new components in the fields of user support and quality assurance. Furthermore, the modelling and analysis software portfolio will be extended, focusing on the development of innovative services in the fields of service-on-demand and operating-on-demand as well as the integration of new data sources and measurement instruments.
The AlpEnDAC helps environmental scientists to benefit from modern data management, data analysis, and simulation techniques. The VAO network, now including ten countries (Austria, France, Germany, Georgia, Italy, Norway, Slovenia, Switzerland, Bulgaria, and the Czech Republic) is an ideal and exciting context for developing the AlpEnDAC with researchers.
This project receives funding from the Bavarian State Ministry of the Environment and Consumer Protection.
How to cite: Bittner, M., Laux, D., Goussev, O., Wüst, S., Handschuih, J., Götz, A., Heller, H., Munke, J., Mair, R., Wittmann, B., Beck, I., Neumann, M., and Rehm, T.: The Alpine Environmental Data Analysis Centre (www.alpendac.eu) – A Component of the Virtual Alpine Observatory (VAO) (www.vao.bayern.de), EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19971, https://doi.org/10.5194/egusphere-egu2020-19971, 2020.
EGU2020-20460 | Displays | ITS1.10/NH9.27
A transdisciplinary approach to mass-movements mitigation strategies in volcanic habitats: an approach from complex systems
Natalia Pardo, Miguel Cabrera, Catalina Gonzalez, Monica Espinosa, Ricardo Camacho, Nancy Palacios, Susana Salazar, Leonardo Parra, and Sonia Archila
Volcanic habitats host a dynamic environment of sudden and long-lasting relationships between nature and culture, making them an archetypal case for the study of resilient communities. In these habitats, the phenomena that occur are often addressed independently and in disciplinary isolation, focusing on the uncertainty and contingency of geohazards, the abrupt and recurrent resetting of biophysical conditions by natural disturbances, or the intrinsic repercussions for anthropogenic memory. From this perspective, mass movements within a volcanic habitat can be addressed as a complex system built over various generations of interacting and interdependent human societies, ecological systems, climate and geological processes. Understanding this multivariable and multi-scalar coexistence becomes central to how mass movements are perceived. In this work, we propose a transdisciplinary approach to the formulation and design of alternative strategies for the mitigation of mass-movement hazards, through responsible collaboration among geoscientists, social scientists, and local actors.
Mass-movement mitigation strategies rarely take into account the cultural relationship of inhabitants with their territories, the complexity of local knowledge, and the capabilities of communities to resolve their own condition [2]. This limits the effectiveness of the response capacity and the resilience of communities and ecosystems to extreme events [2]. Through this research, we aim to find ways to democratize knowledge and to change academic practices within a geoethical context, recognizing and valuing local perspectives. We study an area within the Doña Juana-Cascabel volcanic complex in SW Colombia, focusing on the processes in the vicinity of the Humadal stream and the neighbouring communities. This stream is recognized as the main preoccupation of the inhabitants following the recent occurrence of mass movements in its basin. We address this issue through a team consisting of key local social actors and researchers in anthropology, archaeology, biology, design, engineering, geology, pedagogy, and pedology. We collaborate within a Historical Ecology framework, aiming at empowering sociological resilience-based decision making [3]. This work started with site recognition, mapping the geological, biological, and social settings. In parallel, we listened to and valued the local knowledge about physical geography, ecosystems, and mass movements in an active volcanic habitat, and merged it with scientific knowledge. Moreover, this local knowledge highlighted key aspects of the interaction between the inhabitants and the State's agencies and governmental processes, which underlie the dynamics of any reliable policy and sustainable process.
In this particular site, we identified the organizational capacity to work on reforestation, road maintenance, and weaving as fundamental capabilities for connecting with the design, potential implementation, and sustainability of a set of potential mitigation strategies. With this case study, we invite the multiple actors involved in disaster risk reduction to find common languages beyond disciplinary boundaries, aiming to horizontalize knowledge with the local actors at risk. Through this exercise, we avoid victimization of the communities, reduce power asymmetries, and empower resilience.
[1] Martin, Martin & Kent (2009). Journal of Environmental Management, 91(2), 489–498.
[2] Gaillard (2008). Journal of Volcanology and Geothermal Research, 172(3-4), 315–328.
[3] Brierley (2010). Area, 42(1), 76–85.
How to cite: Pardo, N., Cabrera, M., Gonzalez, C., Espinosa, M., Camacho, R., Palacios, N., Salazar, S., Parra, L., and Archila, S.: A transdisciplinary approach to mass-movements mitigation strategies in volcanic habitats: an approach from complex systems, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20460, https://doi.org/10.5194/egusphere-egu2020-20460, 2020.
Volcanic habitats host a dynamic environment for sudden and long-lasting relationships between nature and culture, becoming an archetypal case for the study of resilient communities. In these habitats, the study of the occurring phenomena is often addressed independently and in disciplinary isolation, focusing on the uncertainty and contingency of geohazards, the abrupt and recurrent resetting of biophysical conditions due to natural disturbances, or the intrinsic repercussions on the anthropogenic memory. Under this perspective, mass-movements within a volcanic habitat can be addressed as a complex system built over various generations of interacting and interdependent human societies, ecological systems, climate and geological processes. Understanding this multivariable and multi-scalar coexistence becomes central in how mass-movements are perceived. In this work, we propose a transdisciplinary approach for the formulation and design of alternative strategies in the mitigation of mass-movements hazards, by responsibly collaborating between geoscientists, social scientists, and local actors.
Mass-movement mitigation strategies rarely take into account the cultural relationship of the inhabitants with their territories and the complexity of the local knowledge and capabilities of the communities to resolve their condition [2]. This limits the effectiveness in the response capacity and resilience of communities and ecosystems to extreme events [2]. Through this research, we aim at finding ways to democratize knowledge, and change academic practices within a geoethical context, recognizing and valuing the local perspectives. In this work, we study an area within the Doña Juana-Cascabel volcanic-complex, located in SW Colombia, and focus on the processes in the vicinity to the Humadal stream and neighbouring communities. This stream is recognized as the main preoccupation of the inhabitants with the recent occurrence of mass-movements in its basin. We address this issue through a team consisting of key local social actors and researchers in anthropology, archaeology, biology, design, engineering, geology, pedagogy, and pedology. We collaborate within a Historical Ecology framework, aiming to the empowerment of sociological resilience-based decision making [3]. This work started with the site recognition, mapping the geological, biological, and social settings. In parallel, we listened and valued the local knowledge about physical geography, ecosystems, and mass-movements in an active volcanic habitat, and merge it with the scientific knowledge. Moreover, this local knowledge enlighted key aspects on the interaction between the inhabitants and the State’s agencies and governmental processes, which underlay the dynamics of any reliable policy and sustainibile process.
In this particular site, we identified the organizational capacity to work on reforestation, road maintenance, and weaving as fundamental capabilities for connecting with the design, potential implementation, and sustainability of a set of potential mitigation strategies. With this case study, we invite the multiple actors involved in disaster risk reduction to find common languages beyond disciplinary boundaries, aiming to horizontalize knowledge with the local actors at risk. Through this exercise, we avoid the victimization of the communities, reduce power asymmetries, and empower resilience.
[1] Martin, Martin & Kent (2009). Journal of Environmental Management, 91(2), 489-498.
[2] Gaillard (2008). Journal of Volcanology and Geothermal Research, 172(3-4), 315-328.
[3] Brierley (2010). Area, 42(1), 76-85.
How to cite: Pardo, N., Cabrera, M., Gonzalez, C., Espinosa, M., Camacho, R., Palacios, N., Salazar, S., Parra, L., and Archila, S.: A transdisciplinary approach to mass-movements mitigation strategies in volcanic habitats: an approach from complex systems, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20460, https://doi.org/10.5194/egusphere-egu2020-20460, 2020.
EGU2020-21235 | Displays | ITS1.10/NH9.27
The Environmental Research Station Schneefernerhaus
Inga Beck and Markus Neumann
The Environmental Research Station Schneefernerhaus (UFS) is Germany’s highest research station, located at an altitude of 2650 m. For over 20 years, many different institutions have been working here on a variety of permanent studies on an inter- and trans-disciplinary basis. Eight key scientific activities were designated in 2007. These are:
- Satellite-based observations and early detection
- Regional climate and atmosphere
- Cosmic radiation and radioactivity
- Hydrology
- Environmental and high-altitude medicine
- Global Atmosphere Watch (GAW)
- Biosphere and Geosphere
- Cloud dynamics
Beyond these permanent research activities, around 80 temporary projects have been conducted with over 13 international partners. Over 30 projects are currently running at the UFS. The ‘size’ of the projects varies from small research groups and very short-term studies to large research consortia and long-term projects. The UFS is, furthermore, a partner in a number of international networks of (mountain) observatories, such as the Global Atmosphere Watch program of the WMO. International exchange is thus ensured.
The operating company of the UFS makes the high quality of research possible. It takes care of the needs of the researchers, such as logistics, data transfer and exchange, and outreach. The UFS also serves as a meeting and educational center for research teams, operators of other stations and early-career scientists. The UFS is easily accessible all year round: a train line runs up the mountain, directly into the UFS building, and allows the transportation of heavy material and devices. A well-equipped workshop allows the in-situ repair of instruments.
Besides some general information about the UFS, the presentation will highlight some recent projects in which the UFS has been involved. It will also show how to use the UFS for your own research ideas.
How to cite: Beck, I. and Neumann, M.: The Environmental Research Station Schneefernerhaus, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21235, https://doi.org/10.5194/egusphere-egu2020-21235, 2020.
ITS1.11/OS1.14 – Interdisciplinary and intercultural approaches for addressing scientific and socio-economic challenges in the North Atlantic region
EGU2020-1937 | Displays | ITS1.11/OS1.14
Reliability of foraminiferal Na/Ca as a direct paleo-salinity proxy in various planktonic species from the eastern tropical North Atlantic
Jacqueline Bertlich, Dirk Nürnberg, Ed Hathorne, Michael Siccha, Jeroen Groeneveld, Julie Meilland, and Michal Kucera
Foraminiferal Na/Ca in planktonic and benthic foraminifers is a promising new method to directly assess past seawater salinities, complementing existing approaches (e.g., paired shell Mg/Ca and δ18O, shell Ba/Ca). Recent culture and field calibration studies have shown a significant positive relationship between Na incorporation into foraminiferal calcite shells and increasing salinity [1, 2], as confirmed by our culture study of Trilobatus sacculifer [3]. However, we note that the sensitivity of Na/Ca in response to salinity changes is species-specific and regionally dependent, whereas temperature could be excluded as a secondary influencing factor [2, 3, 5]. Na/Ca values vary by 1–3 mmol/mol for the same salinity within and between foraminiferal species, suggesting a dominant biological control.
To further evaluate the robustness of Na/Ca for its application as a reliable proxy, here we examine possible secondary controls on foraminiferal Na/Ca with new data for species commonly used for paleoreconstructions (Globigerinoides elongatus, G. ruber (pink), Orbulina universa, Globigerina bulloides, Neogloboquadrina dutertrei) collected by plankton tows in the eastern tropical North Atlantic during R/V Meteor cruise M140. We performed laser ablation ICP-MS measurements on single foraminiferal shells from depth-resolved plankton tows in 20 m net-intervals from locations where salinity was essentially constant, while seawater pH and total alkalinity differed by ~0.5 and 100 µmol/kg, respectively. Plankton tow samples provide new insights into the possible effects of natural variations in carbonate system parameters on Na incorporation into calcite tests with increasing water depth. The comparison of living foraminifers to sedimentary shells gives further information about the preservation state of Na/Ca in calcite shells over time, as fossil shells have mostly undergone gametogenesis during their lifetime, or were affected post mortem by early diagenesis (sedimentation) processes. These fossil foraminifers were collected from surface sediments (M65-1) located in proximity to the plankton tows. Our results show that all measured species, either from plankton tows or buried in the sediment, fall within the Na/Ca range of previous studies [1-5], which increases confidence in a robust Na/Ca-salinity proxy. However, the offset of ~2-5 mmol/mol between living foraminifers collected in surface waters (0-20 m) and fossil assemblages of the same species could be related to spine loss at the end of the foraminiferal life cycle [4]. In addition, the use of inconsistent test sizes could further influence the foraminiferal Na/Ca signal. Our results reveal significantly (R = -0.97, p<0.03) decreasing Na/Ca values with increasing test sizes between 180-250 µm for G. ruber (pink, white), N. dutertrei and T. sacculifer, whereas values increase again for larger size classes >355 µm (R = 0.87, p<0.02).
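The reported size-dependence statistics (e.g., R = -0.97 with p<0.03) are standard Pearson correlations between test size and Na/Ca. A minimal sketch of how such a coefficient is computed, using purely illustrative numbers rather than the study's data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two 1-D samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.corrcoef(x, y)[0, 1])

# Hypothetical example only: Na/Ca (mmol/mol) declining with
# sieve-size class (µm); these values are NOT from the abstract.
size = [180, 200, 212, 250]
na_ca = [7.1, 6.5, 6.0, 5.2]
r = pearson_r(size, na_ca)  # strongly negative for these illustrative data
```

For the associated p-value one would typically use `scipy.stats.pearsonr`, which returns both r and the two-sided significance level.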
[1] Wit et al. (2013) Biogeosciences 10, 6375-6387. [2] Mezger et al. (2016) Paleoceanography 31, 1562-1582. [3] Bertlich et al. (2018) Biogeosciences 15, 5991-6018. [4] Mezger et al. (2019) Biogeosciences 16, 1147-1165. [5] Allen et al. (2016) Geochim. Cosmochim. Acta 193, 197-221.
How to cite: Bertlich, J., Nürnberg, D., Hathorne, E., Siccha, M., Groeneveld, J., Meilland, J., and Kucera, M.: Reliability of foraminiferal Na/Ca as a direct paleo-salinity proxy in various planktonic species from the eastern tropical North Atlantic, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1937, https://doi.org/10.5194/egusphere-egu2020-1937, 2020.
EGU2020-3160 | Displays | ITS1.11/OS1.14
Surface and subsurface Labrador Shelf water mass conditions during the last 6,000 years
Annalena Lochte, Ralph Schneider, Janne Repschläger, Markus Kienast, Thomas Blanz, Dieter Garbe-Schönberg, and Nils Andersen
The Labrador Sea is important for the modern global thermohaline circulation system through the formation of intermediate Labrador Sea Water (LSW) that has been hypothesized to stabilize the modern mode of North Atlantic deep-water circulation. The rate of LSW formation is controlled by the amount of winter heat loss to the atmosphere, the expanse of freshwater in the convection region and the inflow of saline waters from the Atlantic. The Labrador Sea, today, receives freshwater through the East and West Greenland Currents (EGC, WGC) and the Labrador Current (LC). Several studies have suggested the WGC to be the main supplier of freshwater to the Labrador Sea, but the role of the southward flowing LC in Labrador Sea convection is still debated. At the same time, many paleoceanographic reconstructions from the Labrador Shelf focussed on late Deglacial to early Holocene meltwater run-off from the Laurentide Ice Sheet (LIS), whereas little information exists about LC variability since the final melting of the LIS about 7,000 years ago. In order to enable better assessment of the role of the LC in deep-water formation and its importance for Holocene climate variability in Atlantic Canada, this study presents high-resolution middle to late Holocene records of sea surface and bottom water temperatures, freshening and sea ice cover on the Labrador Shelf during the last 6,000 years. Our records reveal that the LC underwent three major oceanographic phases from the Mid- to Late Holocene. From 6.2 to 5.6 ka BP, the LC experienced a cold episode that was followed by warmer conditions between 5.6 and 2.1 ka BP, possibly associated with the late Holocene Thermal Maximum. Although surface waters on the Labrador Shelf cooled gradually after 3 ka BP in response to the Neoglaciation, Labrador Shelf subsurface/bottom waters show a shift to warmer temperatures after 2.1 ka BP. 
Although such an inverse stratification, with cooling of surface and warming of subsurface waters on the Labrador Shelf, would suggest diminished convection during the last two millennia compared to the mid-Holocene, it remains difficult to assess whether hydrographic conditions in the LC have had a significant impact on Labrador Sea deep-water formation. This study was conducted within the HOSST research school with the aim of improving our understanding of the critical processes involved in the North Atlantic thermohaline circulation, which is particularly important in light of current climate change.
How to cite: Lochte, A., Schneider, R., Repschläger, J., Kienast, M., Blanz, T., Garbe-Schönberg, D., and Andersen, N.: Surface and subsurface Labrador Shelf water mass conditions during the last 6,000 years, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3160, https://doi.org/10.5194/egusphere-egu2020-3160, 2020.
EGU2020-3541 | Displays | ITS1.11/OS1.14
Eco-development response to climate change and the isostatic uplift of southwestern Finland: Case study of the Nordsund area
Guido J. M. Verstraeten and Willem W. Verstraeten
Remediation of climate change induced by anthropogenic emissions of greenhouse gases and their precursors is the main focus today. Less well known, however, is that the environment may also be subject to relatively fast geological dynamic phenomena such as the isostatic uplift of Fennoscandia, parts of Canada and northwestern Russia. This uplift affects the archipelago along the coasts of southwestern Finland and Sweden and causes the relocation of human activities.
In this study we investigate the on-ground observed regression of the Gulf of Bothnia on the coast of southwestern Finland and its implications for countryside activities in the framework of the eco-development paradigm. We focus our study on the neighbourhood of the Nordsund peninsula (60°40’30”N, 21°37’14”E) between Keikvesi and Katavakarinselkä, representative of the whole Finnish archipelago, with an average local isostatic uplift of 9 mm per year (5 mm in the south and 14 mm in the Merenkurkku area). The Nordsund peninsula contains a former bay of the Bothnian Sea, called Mustalahti, which has been reduced to a lake since the direct outflow of inland precipitation to the open sea disappeared in the 1980s.
We show that remotely sensed data on vegetation and surface wetness confirm this fast sea regression and the silting-up of the nearby lakes that drain precipitation to the Gulf. The change of the Mustalahti and its vegetation over time is expressed in terms of the Normalized Difference Vegetation Index (NDVI) and the Normalized Difference Wetness Index (NDWI), derived from Landsat 7 data for May 12th, 2000 and from Landsat 8 data for April 23rd, 2019, at a 30 m x 30 m pixel resolution. We discuss this changing coastline in the framework of the Eco-Development paradigm, which may rebalance nature, environment, humans and culture. This paradigm is a valid alternative to the past and present-day dominant socio-economic approach that has contributed to the accelerated change of the Earth’s climate.
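Both indices are simple normalized band differences. A minimal sketch of their computation, using hypothetical reflectance values; the band assignments (Landsat 8 OLI: red = B4, NIR = B5, SWIR1 = B6) are assumptions for illustration, not taken from the abstract:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

def ndwi(nir, swir):
    """Normalized Difference Wetness Index (Gao-type): (NIR - SWIR) / (NIR + SWIR)."""
    nir, swir = np.asarray(nir, float), np.asarray(swir, float)
    return (nir - swir) / (nir + swir)

# Hypothetical surface reflectances for a single 30 m pixel.
red, nir, swir = 0.05, 0.40, 0.20
v = ndvi(nir, red)   # high values indicate dense vegetation
w = ndwi(nir, swir)  # positive values indicate moist canopy or open water
```

In practice both functions would be applied pixel-wise to whole reflectance arrays of the two Landsat scenes, so that silting-up appears as rising NDVI and falling NDWI over the former bay.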
How to cite: Verstraeten, G. J. M. and Verstraeten, W. W.: Eco-development response to climate change and the isostatic uplift of southwestern Finland: Case study of the Nordsund area, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3541, https://doi.org/10.5194/egusphere-egu2020-3541, 2020.
EGU2020-4263 | Displays | ITS1.11/OS1.14
Finding solutions in an interdisciplinary environment
Andrea Bryndum-Buchholz, Ana Corbalan, and Najeem Shajahan
Our rapidly changing world is facing challenges that increasingly demand strong interdisciplinary components in academic projects to find the solutions we need. Successful interdisciplinary research can enhance knowledge and hence lead to new discoveries and innovation. In order to successfully work together in projects that span multiple disciplines, it is important to fully understand the challenges these projects face. We revisit the meaning of interdisciplinarity and evaluate why it has often proven very challenging. For example, one of the greatest challenges is finding a common ground when framing key research questions. We analyze and present an ideal scenario, where challenges and limitations are acknowledged but overcome, and suggest some techniques that can be used to plan and successfully undertake interdisciplinary projects.
How to cite: Bryndum-Buchholz, A., Corbalan, A., and Shajahan, N.: Finding solutions in an interdisciplinary environment , EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4263, https://doi.org/10.5194/egusphere-egu2020-4263, 2020.
EGU2020-4793 | Displays | ITS1.11/OS1.14
Declining silica availability – a challenge in the North Atlantic region?
Kriste Makareviciute-Fichtner, Birte Matthiessen, Heike K. Lotze, and Ulrich Sommer
Understanding how changes in limiting nutrient availability affect life in the oceans requires interdisciplinary efforts. Here we illustrate this with the example of silicon, one of the most common elements on land, whose bioavailable form, silicic acid (Si(OH)4), is a limiting nutrient for silicifying primary producers such as diatoms.
Silicic acid concentrations in the pelagic polar and subpolar North Atlantic have declined by 1-2 μM during spring pre-bloom conditions over the past 25 years. Many coastal areas of the North Atlantic region also face decreased relative availability of silicon due to increased riverine supply of nitrogen and phosphorus and stable or declining loads of silicon. Both declining silicic acid concentrations and declining silicon to nitrogen (Si:N) ratios limit the growth of diatoms, which are major primary producers contributing up to a quarter of global primary production.
To assess the effects of declining silicon availability on phytoplankton communities we conducted a mesocosm experiment manipulating Si:N ratios and copepod grazing pressure on phytoplankton communities from the Baltic Sea. Declining Si:N ratio affected not only diatom abundance and relative biomass but also their species composition and overall plankton diversity. Our results illustrate the importance of silicon in structuring community composition at the base of temperate marine food webs. Changes in silicic acid concentrations and Si:N ratios, therefore, may have far-reaching consequences on oceanic primary production and planktonic food webs.
The decline in silicon concentrations in polar and subpolar North Atlantic waters is attributed to natural multi-decadal variability but is likely amplified by reduced ocean mixing due to increased water temperatures, illustrating the need for international efforts to curb global climate change. The decline in Si:N ratios in coastal oceans also highlights the need for further reduction of nutrient pollution and improved river basin management. This may require interdisciplinary and international approaches to manage anthropogenic perturbations of the silicon cycle.
How to cite: Makareviciute-Fichtner, K., Matthiessen, B., Lotze, H. K., and Sommer, U.: Declining silica availability – a challenge in the North Atlantic region?, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4793, https://doi.org/10.5194/egusphere-egu2020-4793, 2020.
EGU2020-4892 | Displays | ITS1.11/OS1.14
Surface-Sensitive Methods for Marine Nanolayer Time-Series Studies
Florian-David Lange, Nhat-Thao Ton-Nu, and Gernot Friedrichs
The interface between air and sea, the sea surface microlayer, covers a large part of the Earth’s surface and is enriched in amphiphilic organic molecules. It is a zone of very active chemistry and biology. The uppermost molecular layer directly at the air-sea interface, the so-called nanolayer, has a significant impact on wave dynamics by changing the viscoelastic properties of the interface and hence modulates air-sea gas exchange.
To answer the question of whether nanolayer abundance can be directly correlated with primary productivity, a close collaboration between biology and physical chemistry in the spirit of fundamental surface science is necessary. This contribution reports a showcase example of how to apply a physico-chemical laser spectroscopic tool as a valuable contribution to such an interdisciplinary field. The described non-standard experiments yield fresh insight into a complex environmental system and shed light on non-obvious relations between variable biological activity and the physical properties of the air-sea interface. In the end, this is of particular interest for assessing the global role of the North Atlantic as a sink for anthropogenic CO2 emissions. Strong algae blooms take place here, but whether they are accompanied by immediate or delayed nanolayer formation is largely unknown.
From an analytical point of view, the investigation of the very thin organic layer at the air-water interface is challenging and has to rely on surface-sensitive techniques able to distinguish between nanolayer and bulk water signal contributions. In this study, two complementary methods have been applied to measure both the enrichment and the abundance of natural sea surface films. Both laser spectroscopic Vibrational Sum Frequency Generation (VSFG) spectra and Langmuir compression isotherms yield information about the presence of surface-active compounds. Whereas the latter essentially measures surface tension changes, VSFG, as a vibrational type of spectroscopy, supplies additional information about the chemical nature of the interfacial molecules. Based on laboratory studies of organic nanolayer proxies, it was also possible to define a numerical index related to the surface coverage, hence simplifying the use of such measurements for other disciplines.
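The abstract does not specify how the surface-coverage index is defined. As a purely illustrative sketch (the function name, normalization, and reference value are assumptions, not the authors' actual definition), such an index could normalize the measured surface-pressure response against that of a fully covered reference film:

```python
import numpy as np

def coverage_index(surface_pressure, reference_pressure):
    """Hypothetical surface-coverage index: ratio of the measured
    surface-pressure rise (mN/m) to that of a fully covered reference
    film, clipped to the range [0, 1]."""
    return float(np.clip(surface_pressure / reference_pressure, 0.0, 1.0))

# Example: a film producing a 12 mN/m rise against a 30 mN/m reference
print(coverage_index(12.0, 30.0))  # 0.4
```

Any real index derived from VSFG band intensities or isotherm shapes would require calibration against the laboratory proxy studies mentioned above.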
More precisely, natural samples were taken at the Boknis Eck time series station (BETS) in the Baltic Sea over ten years, complemented by a comprehensive data set obtained during two consecutive research cruises in the framework of the Baltic Gas Exchange (Baltic GasEx) experiment. Enrichment of surface-active organic material in the microlayer was confirmed by both methods, indicating the expected tight connection between micro- and nanolayer signals. In agreement with earlier preliminary data (Biogeosciences 10 (2013) 5325), a seasonal trend of nanolayer abundance has been identified that does not directly correlate with chlorophyll concentration or the approximate time of the spring algae bloom at Boknis Eck. This interesting finding implies that primary productivity is not necessarily linked with nanolayer formation and that photochemical and microbial processing of organic precursor compounds play a role in the observed seasonality. More measurements along those lines are needed, in particular for the open Atlantic Ocean, to validate these findings.
How to cite: Lange, F.-D., Ton-Nu, N.-T., and Friedrichs, G.: Surface-Sensitive Methods for Marine Nanolayer Time-Series Studies, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4892, https://doi.org/10.5194/egusphere-egu2020-4892, 2020.
EGU2020-5803 | Displays | ITS1.11/OS1.14
Summer schools of the HOSST-TOSST graduate programme: a multi-sector approach towards scientific communication and outreach
Kirsten Meulenbroek, WanXuan Yao, and Tatum Miko Herrero
The goal of the HOSST-TOSST programme is to cultivate the next generation of advocates for the ocean. As we enter a time when opinions of all kinds are formed through the rapid exchange of unfounded information, science remains ever crucial, as it could and should serve as a common ground based on factual evidence and analytical reasoning. The programme embedded training in scientific communication and outreach methods at the very beginning of the careers of the next generation of ocean scientists. One avenue was a series of mandatory summer schools hosted at marine institutes across the North Atlantic. The summer schools challenged our doctoral candidates from diverse disciplines to collaborate in teams. Each team was assigned a mini-project in which communication and outreach were essential to success.
In Halifax, Canada, the project aim was to create business proposals or products that would be financially viable whilst not encumbering the already struggling ocean.
In Kiel, Germany, the end goal was to develop proposals for Marine Protected Areas in the busiest regions of the Atlantic, all the while navigating between various stakeholders and other ocean users to reach the best compromise.
In Mindelo, Cabo Verde, the participants, including local students, did field research and presented findings on geological processes and marine ecosystems which directly influence the lives of the residents.
The summer schools aimed to instill an awareness of how to conduct scientific communication and outreach to the general public through a multi-faceted approach. The variety among the three projects, locations, and the diverse communities involved contributed to discussions leading to a broader view of the open issues, possible solutions, and remaining scientific questions surrounding the Atlantic Ocean in all its facets.
How to cite: Meulenbroek, K., Yao, W., and Herrero, T. M.: Summer schools of the HOSST-TOSST graduate programme: a multi-sector approach towards scientific communication and outreach, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5803, https://doi.org/10.5194/egusphere-egu2020-5803, 2020.
EGU2020-5811 | Displays | ITS1.11/OS1.14
TOSST Research Expeditions in the North Atlantic Ocean
Ricardo Arruda, Lorenza Raimondi, Patrick Duplessis, Nadine Lehmann, Irena Schulten, Masoud Aali, Yuan Wang, and Scott McCain
Over the six years of the Transatlantic Ocean System Science and Technology (TOSST) program (2014–2019), graduate students participated in a variety of first-class research expeditions in the North Atlantic Ocean, contributing to high-quality datasets for this region and reaching a total of 380 days at sea. These research cruises extended from the Arctic Ocean, Labrador Sea and subpolar North Atlantic to the equatorial North Atlantic, and along the African and Cabo Verdean coasts. A total of 12 long-term cruises, involving collaboration between 18 research institutes, were conducted on board 10 research vessels of various nationalities (Canada, Germany, Bermuda, Sweden, Ireland and USA). The range of measurements performed during these cruises, which highlights the interdisciplinary nature of the TOSST program, includes chemical oceanography, biological oceanography, physical oceanography, marine biogeochemistry, microbiology, paleoceanography, geology, marine geophysics, and atmospheric chemistry. In this work, we showcase the breadth of research covered by TOSST graduates in the North Atlantic Ocean and provide details on the overall goals and objectives of each cruise, the teams and research vessels involved, the diverse scientific instrumentation deployed, and the sampling schemes. We highlight the importance of multi-disciplinary expeditions and at-sea experience for the professional as well as personal development of early career scientists. Considerable logistical and economic effort is required to collect samples and deploy instruments; collaboration between disciplines, research institutes and countries (of which TOSST graduates’ research is an example) is therefore fundamental to increasing the quality, quantity and variety of observations in the North Atlantic Ocean.
How to cite: Arruda, R., Raimondi, L., Duplessis, P., Lehmann, N., Schulten, I., Aali, M., Wang, Y., and McCain, S.: TOSST Research Expeditions in the North Atlantic Ocean, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5811, https://doi.org/10.5194/egusphere-egu2020-5811, 2020.
EGU2020-5969 | Displays | ITS1.11/OS1.14 | Highlight
The #GoAtlanticBlue and the All-Atlantic Ocean Youth Ambassador Initiative to raise awareness of and promote the Atlantic Ocean
Margaret Rae
The Atlantic Ocean Research Alliance was launched with the signing of the Galway Statement on Atlantic Ocean Cooperation between Canada, the European Union and the United States of America in May 2013. A request to raise the visibility of the Atlantic Ocean was made by the trilateral Galway Statement Implementation Committee. In answer, the #GoAtlanticBlue campaign was created and piloted in June 2019 alongside World Oceans Day as a highly visible way to raise the profile of the Atlantic Ocean in people’s everyday lives and promote a reconnection with it.
The #GoAtlanticBlue campaign, celebrating the Atlantic Ocean and our connections to it, asked people and places to don blue garments, blue face paint and blue wigs and to celebrate their connections, whether those be livelihoods, inspiration, health and wellbeing, or sustainable actions and developments for the ocean. At night, the ask was to light up in blue and celebrate these connections. All were asked to share on social media. The success of this endeavour will be described and the next level of ambition discussed.
Following on from the June event, an All-Atlantic Ocean Youth Ambassador summer school was held in August 2019 at the special request of the Healthy Oceans & Seas Unit at the European Commission DG Research and Innovation. The All-Atlantic Ocean Youth Ambassador initiative is supported by the All-Atlantic Ocean Research Alliance under the Galway Statement on Atlantic Ocean Cooperation and the Belém Statement on Atlantic Ocean Research & Innovation Cooperation. Twenty-three Youth Ambassadors from 15 countries along and across the Atlantic Ocean participated in this event. Three campaign types were co-created during this time under the umbrella of #MyAtlanticStory and #GoAtlanticBlue. These campaigns, as well as the All-Atlantic Ocean Youth Ambassador Forum launched in Brussels in February 2020, will be described and their success to date shown.
How to cite: Rae, M.: The #GoAtlanticBlue and the All-Atlantic Ocean Youth Ambassador Initiative to raise awareness of and promote the Atlantic Ocean, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5969, https://doi.org/10.5194/egusphere-egu2020-5969, 2020.
EGU2020-7665 | Displays | ITS1.11/OS1.14
Thermohaline multi-phase simulations of vent fluid salinity evolution following a diking event at East Pacific Rise
Falko Vehling, Jörg Hasenclever, and Lars Rüpke
Submarine hydrothermal systems sustain unique ecosystems, affect global-scale biogeochemical ocean cycles, and mobilize metals from the oceanic crust to form volcanogenic massive sulfide deposits. Quantifying these processes requires linking seafloor observations to physico-chemical processes at depth, and this is where numerical models of hydrothermal circulation can be particularly useful. One region where sufficient data is available to establish such a link is the East Pacific Rise (EPR) at 9°N, where vent fluid salinity and temperature have been repeatedly measured over a long time period. Here, large salinity and temperature changes of vents at the axial graben have been correlated with diking events and extrusive lava flows. Salinity changes imply the phase separation of seawater into a high-salinity brine and a low-salinity vapor phase. The intrusion of a new dike is likely to result in a characteristic salinity signal over several years: first the low-salinity vapor phase rises, and later the brine phase appears along with a decreasing vent temperature. These short-term salinity variations are superimposed on the background salinity signal, which is modulated by phase separation phenomena on top of the axial magma lens.
From these variations, numerical models can help to infer sub-surface properties and processes such as permeability, background flow rates, and brine retention as well as mobilization – if the employed model can resolve the complexity of phase separation. We here present a novel numerical model for saltwater hydrothermal systems, which uses the Finite Volume Method on unstructured meshes and the Newton-Raphson Method for solving the coupled equations. We use this new 2-D model to investigate a setup that mimics hydrothermal convection on top of the axial magma lens, which is then perturbed by a dike intrusion. In a comprehensive suite of model runs, we have identified the key controls on the time evolution of vent fluid salinity following the diking event. Based on these insights, we can reproduce time-series data from the EPR at 9°N and infer likely ranges of rock properties for the oceanic crust layer 2B.
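The Newton-Raphson iteration mentioned above can be illustrated on a small coupled nonlinear system. This is a generic sketch of the method itself, not the model's actual governing equations or discretization (which are not given in the abstract):

```python
import numpy as np

def newton_raphson(residual, jacobian, x0, tol=1e-10, max_iter=50):
    """Solve residual(x) = 0 for a coupled nonlinear system by
    Newton-Raphson: repeatedly linearize and solve J * dx = -r."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            break
        x = x + np.linalg.solve(jacobian(x), -r)
    return x

# Toy coupled system: x^2 + y^2 = 4 and x*y = 1
res = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
jac = lambda v: np.array([[2 * v[0], 2 * v[1]], [v[1], v[0]]])
x, y = newton_raphson(res, jac, [2.0, 0.5])
```

In a finite-volume multi-phase code the residual vector would gather the discretized conservation equations per control volume and the Jacobian would be a large sparse matrix, but the linearize-and-solve loop is the same.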
Our work demonstrates the value of integrating observational data into numerical hydrothermal models. Unfortunately, data collection efforts such as mapping of magmatic events, continuous measurements of hydrothermal vent fluids, or crustal drilling are very expensive and technically challenging. Global and transdisciplinary collaboration would therefore be very useful for acquiring data of maximal benefit to all disciplines. Compared to the EPR, the Mid-Atlantic Ridge shows higher geological complexity, due to its lower spreading rate, and a higher diversity of vent fluid chemistry, but less continuous data is available, which for now hampers numerical modelling research there. Numerical case studies at the EPR therefore serve as important validity checks for our numerical model and indicate where it must be enhanced to quantify processes in hydrothermal systems at the Mid-Atlantic Ridge.
How to cite: Vehling, F., Hasenclever, J., and Rüpke, L.: Thermohaline multi-phase simulations of vent fluid salinity evolution following a diking event at East Pacific Rise, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7665, https://doi.org/10.5194/egusphere-egu2020-7665, 2020.
EGU2020-8880 | Displays | ITS1.11/OS1.14
Discovering a new type of oceanic intraplate volcanism: the experience of two PhD students - a beginner and a seasoned marine geologist
Tatum Miko Herrero and Dominik Pałgan
Unearthing transit data from several expeditions with both trained and untrained eyes started a curiosity-driven project that resulted in the discovery of a new type of intraplate volcanism. Author 1 was a first-year doctoral candidate with a background in terrestrial volcano geomorphology, trained in the Philippines, who was new to the field of seafloor geology. Author 2 was a fourth-year doctoral candidate with a background in submarine volcanology and seafloor mapping, trained in Poland, a seasoned seafloor mapper who served as a guide to the workings of GEOMAR and the Helmholtz Research School for Ocean System Science and Technology (HOSST) program, as well as to submarine volcanology. Author 1 faced a challenge: learning new techniques used in the submarine environment, including how to acquire and post-process ship-based bathymetric data and how to interpret seafloor structures in order to construct geological maps of the seafloor. This transition from on-land to submarine environments marked the beginning of an evolving understanding of the processes shaping the seafloor of the North Atlantic and a focus on new scientific questions.
Existing ship transit data from multiple cruises were processed, and anomalously high acoustic backscatter signals were found on the seafloor where such anomalies theoretically should not exist on 20 Ma old oceanic crust. This coincided with later extraordinary findings collected during a more recent expedition (M139, 2017). The observed high backscatter resembled that of fresh lava flows found along mid-ocean ridge axes. The area is an intraplate setting that does not have a known record of hotspot activity. Participation of both authors in expedition M139 provided an excellent environment to learn about submarine volcanology and seafloor mapping through a learn-by-doing approach. Together, the authors and the whole team gathered rock samples and mapped the area in detail. Laboratory analysis and geochemical modelling concluded that the lava flows are of a different source from known intraplate volcanism compositions. The results would change the view on subducted plate composition, the geochemical budget of the Earth, and the availability of hard substrate and chemosynthetic environments for organisms in such remote regions of the seafloor.
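The kind of anomaly screening described above can be sketched schematically. This is not the authors' actual workflow; it only illustrates flagging unusually bright cells in a gridded backscatter mosaic against a robust background statistic (median and median absolute deviation), with entirely synthetic data:

```python
import numpy as np

def flag_high_backscatter(grid, n_mad=3.0):
    """Flag grid cells whose backscatter exceeds the median by more
    than n_mad median absolute deviations (robust to outliers)."""
    med = np.nanmedian(grid)
    mad = np.nanmedian(np.abs(grid - med))
    return grid > med + n_mad * mad

# Synthetic mosaic: quiet sedimented background with one bright patch
rng = np.random.default_rng(0)
grid = rng.normal(-30.0, 1.0, size=(50, 50))  # dB-like background
grid[20:25, 20:25] = -10.0                    # fresh-lava-like anomaly
flags = flag_high_backscatter(grid)           # the patch is flagged
```

Real transit-data processing additionally involves radiometric and geometric corrections and expert visual inspection, which this sketch omits.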
The Helmholtz Research School for Ocean System Science and Technology (HOSST) has created an opportunity to bring together early career scientists of different initial backgrounds and learning cultures. It has provided a venue for candidates to go through similar experiences not only in conducting research but also in dealing with “PhD life”. This is because the HOSST Research School values working in close ties on communal big-picture goals for the North Atlantic Ocean and fosters a valuable support group. In this case, it was a mentor-mentee relationship that helped contribute to a scientific breakthrough. This is just one example of the support relationships that have developed in the HOSST graduate program.
How to cite: Herrero, T. M. and Pałgan, D.: Discovering a new type of oceanic intraplate volcanism: the experience of two PhD students - a beginner and a seasoned marine geologist, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8880, https://doi.org/10.5194/egusphere-egu2020-8880, 2020.
Unearthing transit data from several expeditions with both trained and untrained eyes started a curiosity-driven project that resulted in the discovery of the new type of intraplate volcanism. Author 1 was a first year doctoral candidate with a background in terrestrial volcano geomorphology, trained in the Philippines and was new to the field of seafloor geology. Author 2 was a fourth year doctoral candidate with a background in submarine volcanology and seafloor mapping, trained in Poland and was a seasoned seafloor mapper who served as a guide in the workings of GEOMAR and the Helmholtz Research School for Ocean System Science and Technology (HOSST) Program, as well as in submarine volcanology. Author 1 faced a challenge - learning new techniques used in the submarine environment, including how to acquire and post-process ship-based bathymetric data, and interpret seafloor structures in order to construct geological maps of the seafloor. This transition from on-land to submarine environment was the beginning of the development in understanding the processes shaping the seafloor of the North Atlantic and to focus on new scientific questions.
Already existing ship transit data from multiple cruises were processed and anomalous high acoustic backscatter signals were found on the seafloor where such anomalies theoretically should not exist on 20Ma old oceanic crust. This coincided with later extraordinary findings collected during a more recent expedition (M139 from 2017). Observed high backscatter resembled that of fresh lava flows found along mid-ocean ridge axis. The area is an intraplate setting that do not have a known record of hotspot activity. Participation of both Authors in the expedition M139 provided an excellent environment to learn about submarine volcanology and seafloor mapping by learn-by-doing approach. Together, the authors and the whole team gathered rock samples and mapped the area in detail. Laboratory analysis and geochemical modelling concluded that the lava flows are of a different source from known intraplate volcanism compositions. The results would change the view on subducted plate composition, the geochemical budget of the Earth, and the availability of hard substrate and chemosynthetic environments for organisms in such remote regions of the seafloor.
The Helmholtz Research School for Ocean System Science and Technology (HOSST) has created an opportunity to bring together early-career scientists of different initial backgrounds and learning cultures. It has provided a venue for candidates to go through similar experiences, not only in conducting research but also in dealing with “PhD life”. This is because the HOSST Research School values close collaboration on communal big-picture goals for the North Atlantic Ocean and fosters a valuable support group. In this case it was a mentor-mentee relationship that helped contribute to a scientific breakthrough. This is just one example of the support relationships that have developed in the HOSST graduate program.
How to cite: Herrero, T. M. and Pałgan, D.: Discovering a new type of oceanic intraplate volcanism: the experience of two PhD students - a beginner and a seasoned marine geologist, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8880, https://doi.org/10.5194/egusphere-egu2020-8880, 2020.
EGU2020-9594 | Displays | ITS1.11/OS1.14 | Highlight
Trans- and interdisciplinary research - Running a Graduate Research School across the Atlantic OceanChristel van den Bogaard and Kirsten Laing
Understanding ocean and atmosphere dynamics in the Atlantic Ocean is the goal of the HOSST-TOSST Research School “Transatlantic Ocean System Science and Technology”. At the heart of the project is the introduction of scientific work across topics of the North Atlantic Ocean System. Our goal is to motivate young researchers to consider and engage with various aspects of ocean research beyond their own special field of research. To this end we have established a weekly seminar series supported by videoconferencing, which allows us to stay in contact even with an ocean between us. In addition to staying in contact remotely, we meet once a year in person at a joint summer school, taking up topics outside the immediate research areas and having all participants work in small groups. Co-supervision of doctoral theses and extended research exchanges at the partner university, working with the co-supervisor's research group, are fundamental to the full transatlantic research experience.
The poster and our presence will give interested persons the chance to learn from our experience of how to enable a good group dynamic in a research school, providing the basis for the best interdisciplinary research. Come and learn from our experience of establishing a dynamic research network across the Atlantic.
How to cite: van den Bogaard, C. and Laing, K.: Trans- and interdisciplinary research - Running a Graduate Research School across the Atlantic Ocean, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9594, https://doi.org/10.5194/egusphere-egu2020-9594, 2020.
EGU2020-10056 | Displays | ITS1.11/OS1.14 | Highlight
Numbers vs. Narratives: the importance of integrating social science perspectives in ocean sustainability researchHelen Packer and Mirjam Held
Many disciplines study the ocean and its uses from different perspectives. Recently, there has been growing awareness of the inseparability of social and ecological systems, and that achieving sustainable use of ocean resources will require the integration of different types of knowledge and disciplines. In this presentation, we will draw on the experience of two early-career interdisciplinary scientists to present examples of the role the social sciences can play in achieving sustainable ocean management, and how and why they should be integrated with other ocean disciplines. More specifically, we will present how qualitative research approaches to understanding seafood sustainability governance and community/rights-based management make an important contribution to sustainable ocean management. We conclude that to achieve ocean sustainability, which is a societal problem, we need not only numbers but also the social sciences and their narratives.
How to cite: Packer, H. and Held, M.: Numbers vs. Narratives: the importance of integrating social science perspectives in ocean sustainability research, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10056, https://doi.org/10.5194/egusphere-egu2020-10056, 2020.
EGU2020-11245 | Displays | ITS1.11/OS1.14
Iceland-Faroe Ridge overflow dynamics, 55-6 ka BPMaryam Mirzaloo, Dirk Nürnberg, Markus Kienast, and Jeroen van der Lubbe
Understanding past changes in this critical area of oceanic circulation will help predict future climate conditions and their related socio-economic impacts. Sediment cores recovered from the western flank of the Iceland-Faroe Ridge (IFR; P457-905 and -909) provide unique archives to reconstruct changes in the Iceland-Scotland Overflow Water (ISOW), an important component of the Atlantic Meridional Overturning Circulation (AMOC), over 55-6 ka BP. We provide high-resolution records of lithogenic grain size and XRF bulk chemistry on millennial timescales. The age models of both cores have been constrained by radiocarbon dating of planktonic foraminifera and distinct tephra layers, which include the well-known Faroe-Marine-Ash-Zones (FMAZ) II and III. Both grain size and XRF bulk chemistry (Zr/Rb and Ti/K) reveal prominent Dansgaard-Oeschger sedimentary cycles, which reflect considerable changes in near-bottom current strength and sediment transport/deposition. The transitions between cold Greenland Stadials (GSs) and warm Greenland Interstadials (GIs) occur in typical, recurring sedimentation patterns. The GIs are characterized by relatively strong bottom currents and the transport/deposition of basaltic (Ti-rich) silts from local volcanic sources, resembling the modern ocean circulation pattern. In contrast, fine-grained felsic (K-rich) sediments were deposited during GSs, when the ISOW was weak. In particular, the Heinrich(-like) Stadials HS1 and HS2 stand out as intervals of very fine felsic sediment deposition and, hence, slackened bottom currents. The bottom currents appear to strengthen progressively throughout the GIs and decline sharply towards the GSs. This pattern contrasts with records from north of the IFR, which might be explained by a diminishing contribution of the flow cascading over the IFR.
Together, these new records show strong changes in bottom current dynamics related to the Iceland-Scotland overflow, which has a strong influence on the past and modern climate of the North Atlantic region. Climate change is, however, an interdisciplinary field of research. The HOSST-TOSST transatlantic interdisciplinary research program provides a unique opportunity for constructive communication and collaboration among scientists with different skills, filling knowledge gaps and bridging the earth sciences with social and economic disciplines. Such interdisciplinary programs at early stages of an academic career are necessary to move and encourage the new generation of the scientific community toward a tradition of broad-scale interactions.
How to cite: Mirzaloo, M., Nürnberg, D., Kienast, M., and van der Lubbe, J.: Iceland-Faroe Ridge overflow dynamics, 55-6 ka BP, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11245, https://doi.org/10.5194/egusphere-egu2020-11245, 2020.
EGU2020-12036 | Displays | ITS1.11/OS1.14 | Highlight
A transatlantic, multidisciplinary graduate school focussed on the North Atlantic Ocean: rationale, challenges, lessons-learned, achievements and benefits.Douglas Wallace, Markus Kienast, Kirsten Laing, Brendal Townsend, Christian Dullo, Colin Devey, Christel van den Bogaard, and Tatiana Cabral
Increasingly, careers in ocean research are international and lie outside traditional employment sectors (academia and government). In response to a need for training that prepares the next generation of ocean scientists for a globalized, multisectoral environment, we initiated a transatlantic, multi-disciplinary graduate school which connected students and their supervisors in Halifax, Canada and Kiel, Germany. We took advantage of complementary capacities and cultures on both sides of the Atlantic to create a training program that conveyed technical and research skills in ocean science and advanced technology, and promoted the ability to manage deep sea and open ocean environments. Our goal was to provide each graduate with an international network and the ability to work effectively as an “advocate for the ocean”.
The transatlantic graduate school was supported from 2012 to 2020 with funding obtained, separately, from the Natural Sciences and Engineering Research Council’s CREATE program in Canada and the Helmholtz Association’s Graduate Research School program in Germany. The NSERC CREATE Transatlantic Ocean System Science and Technology (TOSST) was based at Dalhousie University, whereas the Helmholtz Ocean System Science and Technology (HOSST) graduate school was based at the Helmholtz Centre for Ocean Research Kiel (GEOMAR) with participation from the Christian-Albrecht University of Kiel. TOSST supported a total of 20 PhD and 3 Masters candidates. HOSST supported 24 doctoral researchers. The participants were recruited into 2 cohorts.
The participants’ disciplines ranged from marine geology to atmospheric physics and included molecular ecology, marine conservation, and the social and policy sciences. The common focus was on the Atlantic Ocean and on value-added training that addressed business skills as well as economic, regulatory, management and cultural aspects relevant to Atlantic Ocean spaces. A program of annual Summer Schools included two held in a small island developing state (Cabo Verde), with participation of African students.
TOSST-HOSST did not attempt to unify the disparate academic systems of Germany and Canada but, instead, focussed on connecting and broadening the experience of young researchers. The program sought to maximise the value of transatlantic scientific cooperation, convey broad experience and skills beyond students’ individual projects and disciplines, and create a diverse community of scholars who could work effectively together.
The presentation will highlight benefits and challenges encountered, on both sides of the Atlantic, and lessons learned during the program. Examples of lessons learned include: the value of a bilateral, cohort model for building networks of young researchers over long distances (as opposed to more distributed, multi-institutional networks); the importance of regular (effective) videoconferencing as well as (occasional) face-to-face meetings; the importance of program coordinators for overcoming barriers to international exchanges; the risk of overburdening participants with program requirements without corresponding relief from the academic requirements of their home institutions; the need for supervisors to commit to the international aspects of the program; and the value of exposure to radically different research environments (including those in developing countries).
How to cite: Wallace, D., Kienast, M., Laing, K., Townsend, B., Dullo, C., Devey, C., van den Bogaard, C., and Cabral, T.: A transatlantic, multidisciplinary graduate school focussed on the North Atlantic Ocean: rationale, challenges, lessons-learned, achievements and benefits., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12036, https://doi.org/10.5194/egusphere-egu2020-12036, 2020.
EGU2020-15577 | Displays | ITS1.11/OS1.14
Heme b distributions through the Atlantic Ocean: in situ identification of iron limited phytoplanktonEvangelia Louropoulou, Martha Gledhill, Eric P. Achterberg, Thomas J. Browning, David J. Honey, Ruth A. Schmitz, and Alessandro Tagliabue
Heme b is an iron-containing cofactor in hemoproteins that participates in the fundamental processes of photosynthesis and respiration in phytoplankton. Heme b concentrations typically decline in waters with low iron concentrations, but due to a lack of field data the distribution of heme b in particulate material in the ocean is poorly constrained. Within the framework of the Helmholtz Research School for Ocean System Science and Technology (HOSST) and the GEOTRACES programme, the authors compiled datasets and conducted multidisciplinary research (e.g. chemical oceanography, microbiology, biogeochemical modelling) in order to test heme b as an indicator of in situ iron-limited phytoplankton. This study was initiated in the North Atlantic Ocean and expanded to the under-sampled South Atlantic Ocean for comparison of the results, considering the different phytoplankton populations. Here, we report particulate heme b distributions across the Atlantic Ocean (59.9°N to 34.6°S). Heme b concentrations in surface waters ranged from 0.10 to 33.7 pmol L⁻¹ (median = 1.47 pmol L⁻¹, n = 974) and were highest in regions with high biomass. The ratio of heme b to particulate organic carbon (POC) exhibited a mean value of 0.44 μmol heme b mol⁻¹ POC. We identified a ratio of 0.10 µmol heme b mol⁻¹ POC as the cut-off between heme b-replete and heme b-deficient phytoplankton. By this definition, the ratio of heme b to POC was consistently below 0.10 μmol mol⁻¹ in areas characterized by low Fe supply, namely the subtropical South Atlantic gyre and the seasonally iron-limited Irminger Basin. Thus, the ratio of heme b to POC gave a reliable indication of iron-limited phytoplankton communities in situ. Furthermore, the comparison of observed and modelled heme b suggested that heme b could account for between 0.17% and 9.1% of biogenic iron.
This range is comparable to previous culturing observations for species with low heme b content and species growing in low-Fe (≤0.50 nmol L⁻¹) or low-nitrate culturing media. Our large-scale observations of heme b relative to organic matter suggest that changes in iron supply affect phytoplankton iron status.
How to cite: Louropoulou, E., Gledhill, M., Achterberg, E. P., Browning, T. J., Honey, D. J., Schmitz, R. A., and Tagliabue, A.: Heme b distributions through the Atlantic Ocean: in situ identification of iron limited phytoplankton, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-15577, https://doi.org/10.5194/egusphere-egu2020-15577, 2020.
EGU2020-22065 | Displays | ITS1.11/OS1.14
The Marine Environmental Observation, Prediction and Response Network (MEOPAR): An Interdisciplinary, Networked Approach to Building Canada’s Marine Research CapacityLaura Avery, Doug Wallace, and Rodrigo Menafra
The Marine Environmental Observation, Prediction and Response Network (MEOPAR) is an interdisciplinary Canadian Network of Centres of Excellence, connecting leading marine researchers across the country with trainees, partners and communities. MEOPAR funds research, trains Highly-Qualified Personnel, develops strategic partnerships, and works to support knowledge mobilization in marine challenges and opportunities for the benefit of the Canadian economy and society. As a Network, MEOPAR’s strength lies in our inter-sectoral connections—to researchers, partners, organizations, and Indigenous communities, all of whom have an interest in learning more about risks and opportunities in the marine environment. The Network funds research focusing on the North Atlantic, St. Lawrence, Arctic Ocean, and Salish Sea.
MEOPAR has trained over 700 Highly-Qualified Personnel (“MEOPeers”) since 2012. One in three MEOPeers is an international student or researcher who has chosen to study or progress in their research career in Canada. MEOPAR’s training program builds capacity in interdisciplinary research and 21st-century skills related to marine environmental risk and the required response and policy strategies. Training content is based on MEOPAR's four outcome areas (Ocean Observation; Forecasting and Prediction; Coastal Resilience; and Marine Operations), along with core content areas relevant to Canada’s next generation of marine professionals (Knowledge Translation and Science Communication; Interdisciplinary Research; and Career Development). To help build capacity in marine research, MEOPAR offers a suite of training initiatives to post-secondary students and early-career researchers, including a Postdoctoral Fellowship Award, Early Career Faculty grants, travel awards, workshops, International Research Internship and Visiting Scholar funding, and an Annual Training Meeting. These initiatives provide MEOPeers with value-added training opportunities they would not be able to access through their academic programs or research labs. This poster will introduce MEOPAR’s interdisciplinary and intercultural approaches to training the next generation of marine leaders in Canada. Case studies will feature MEOPeers working in the North Atlantic region who are pursuing value-added training opportunities supported by the Network.
How to cite: Avery, L., Wallace, D., and Menafra, R.: The Marine Environmental Observation, Prediction and Response Network (MEOPAR): An Interdisciplinary, Networked Approach to Building Canada’s Marine Research Capacity , EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22065, https://doi.org/10.5194/egusphere-egu2020-22065, 2020.
EGU2020-22573 | Displays | ITS1.11/OS1.14
Learning German: The significance of language in a multicultural graduate schoolMirjam Held, Ricardo Arruda, Allison Chua, and Ana Corbalan
The HOSST and TOSST transatlantic graduate schools were conceived and designed as multidisciplinary and multicultural training opportunities. While HOSST is headquartered at the GEOMAR Helmholtz Centre for Ocean Research in Kiel, Germany, TOSST is run out of Dalhousie University in Halifax, Canada. English being the language of science, it is also the main language of communication in both programs. For most HOSST and TOSST students, however, English is not their native tongue but a second or even third language.
Language is a fundamental aspect of any culture; in fact, the two are intertwined and mutually influence each other. A culture can only be fully understood through its corresponding language, while engaging with a different language also illuminates the culture behind it. An integral part of the HOSST and TOSST graduate schools is the requirement that each student spend a four-month research exchange at the sister institution. For most TOSST students, this meant immersing themselves not only in German culture but also in the German language.
To ease the transition to working and living in Germany, TOSST offered its students a German course, an offering requested by the students and unanimously supported by the TOSST leadership team. Thanks to longstanding relationships with the German community in Halifax, the TOSST German course was offered through the German Heritage Language School. It so happened that the teacher was also a TOSST student. Many students accepted the offer to immerse themselves in a new language and culture ahead of their research exchange. They did not, of course, reach fluency after one or two terms, but studying German prepared them to engage with residents in everyday situations and to better understand the local culture.
Beyond these practical applications, the students appreciated an opportunity for lifelong learning outside their field of research. Both the students and the teacher found that engaging with the German language during their workday fostered their creativity by providing a stimulus different from their usual research. The German course further provided an opportunity to build and deepen friendships among TOSST students across cultures and disciplines. The course not only provided theoretical knowledge of German culture but also opened up access to the sizeable German community in Halifax. A handful of students even continued with the course after their research exchange was completed, as they saw studying the German language and culture as a skill that will serve them well beyond the TOSST graduate school.
How to cite: Held, M., Arruda, R., Chua, A., and Corbalan, A.: Learning German: The significance of language in a multicultural graduate school, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22573, https://doi.org/10.5194/egusphere-egu2020-22573, 2020.
ITS1.12/BG1.20 – Solutions for sustainable agri-food systems under climate change and globalisation
EGU2020-19693 | Displays | ITS1.12/BG1.20
The Food-Environment-Health Nexus of nutrition security in India
Maria Cristina Rulli, Livia Ricciardi, Davide Danilo Chiarelli, and Paolo D'Odorico
Feeding humanity while preserving environmental sustainability is one of the major challenges of the coming decades. Much of the global pressure on planetary sustainability comes from a food system that increases production at the expense of the environment. At the same time, nutrition-related diseases caused by low-quality diets are on the rise. The 2018 FAO report on the State of Food Security and Nutrition in the World shows that the number of malnourished people keeps increasing. Undernourished people number 821 million, including 151 million children under five affected by stunting, while the lives of over 50 million children in the world continue to be threatened by wasting. On the other hand, over 38 million children under five years of age are overweight and 672 million adults are obese, while diabetes, high blood pressure and anaemia are increasing.
Iron Deficiency Anaemia (IDA) is a major problem in India, especially among women. Around 53.1% of Indian women are affected by IDA, which is indeed becoming a major public health issue.
Although India was the first country to launch a National Nutritional Anemia Prevention Program, in 1970, IDA remains widespread. There are many reasons for its persistence in India: insufficient iron intake, poor iron absorption, increased iron demand during repeated pregnancy and lactation, insufficient iron reserves at birth, the timing of umbilical cord clamping, and food supplementation.
Punjab is the Indian state with the most severe prevalence of anaemia, despite being one of the main food producers of India. Taking Punjab as a case study, we analyse to what extent it is possible to feed the Punjab population a healthy (i.e., adequate in terms of micronutrients) and sustainable diet. To this end, we estimate iron requirements using data from the National Family Health Survey-4 (NFHS-4) and population projections. We then evaluate the natural resources (i.e., land and water) used for the current diet and the additional resources needed to sustainably feed the local population a reference healthy planetary diet.
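The population-level iron requirement described above amounts to aggregating group sizes and recommended intakes. A minimal sketch of that bookkeeping follows; the demographic groups, population counts, and per-capita requirements are invented placeholders, not figures from NFHS-4 or the study.

```python
# Toy estimate of a population's annual dietary iron requirement:
# sum over demographic groups of (group size x recommended daily intake).
# All numbers are illustrative placeholders, not survey values.

groups = {
    # group: (population, iron requirement in mg/person/day)
    "women_15_49": (7.0e6, 29.0),
    "men_15_49":   (7.5e6, 17.0),
    "children":    (4.0e6, 11.0),
}

def annual_iron_tonnes(groups):
    """Total iron requirement in tonnes per year."""
    mg_per_day = sum(size * rda for size, rda in groups.values())
    return mg_per_day * 365 / 1e9   # mg/day -> t/yr

print(round(annual_iron_tonnes(groups), 2))  # ~136.69 t of iron per year
```

Comparing such a requirement with the iron actually supplied by local production would then indicate the gap that dietary change or supplementation has to close.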
How to cite: Rulli, M. C., Ricciardi, L., Chiarelli, D. D., and D'Odorico, P.: The Food-Environment-Health Nexus of nutrition security in India, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19693, https://doi.org/10.5194/egusphere-egu2020-19693, 2020.
EGU2020-19 | Displays | ITS1.12/BG1.20
Drivers of multi environmental impacts embodied in international rice trade
Dario Caro and Fabio Sporchia
Global rice production and trade have grown strongly in recent years, and traded rice embodies a large share of virtual resources shipped from one country to another. While most studies have focused on single impacts embodied in the international trade of agricultural products, this study provides a complete overview of the three most relevant resources and impacts embodied in international rice trade: water, land and CH4 emissions. Our analysis covers more than 160 countries over the period 2000-2016, using country-specific impact factors. This trilateral analysis allows trade-offs between impact categories to be assessed and informs the discussion of international trade policies. Indeed, while the impacts embodied in trade are mostly determined by the volume of rice traded, the three country-specific factors (water demand, yield and emission factor) also shape the results, revealing trade-offs among the three impacts. Existing trade flows are mainly led by economic considerations rather than environmental performance. We conclude that international policies should encourage developing countries, which are among the largest exporters of rice and often have less efficient production, to invest in improving their environmental performance while maintaining their international market competitiveness.
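The embodied-impact accounting described above can be sketched as each bilateral flow carrying the exporter's country-specific burdens. This is a hedged illustration, not the study's method in detail: the country codes, trade volumes, and per-tonne factors below are invented.

```python
# Sketch of "embodied impact" accounting: each trade flow carries
# water, land and CH4 burdens computed with the *exporter's*
# country-specific factors. All values are illustrative.

trade_tonnes = {               # (exporter, importer) -> rice traded [t]
    ("IND", "NGA"): 1000.0,
    ("THA", "CHN"): 500.0,
}
impact_factors = {             # exporter -> per-tonne impact factors
    "IND": {"water_m3": 2800.0, "land_ha": 0.25, "ch4_kg": 100.0},
    "THA": {"water_m3": 2200.0, "land_ha": 0.33, "ch4_kg": 120.0},
}

def embodied_impacts(trade, factors):
    """Sum each impact category over all bilateral flows."""
    totals = {"water_m3": 0.0, "land_ha": 0.0, "ch4_kg": 0.0}
    for (exporter, _importer), tonnes in trade.items():
        for category, per_tonne in factors[exporter].items():
            totals[category] += tonnes * per_tonne
    return totals

print(embodied_impacts(trade_tonnes, impact_factors))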
How to cite: Caro, D. and Sporchia, F.: Drivers of multi environmental impacts embodied in international rice trade, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19, https://doi.org/10.5194/egusphere-egu2020-19, 2020.
EGU2020-6934 | Displays | ITS1.12/BG1.20
Representative meat consumption pathways for sub-Saharan Africa and their local and global energy and environmental implications
Giacomo Falchetta, Nicolò Golinucci, and Michel Noussan
In sub-Saharan Africa (SSA) most people live on plant-dominated diets, with significantly lower levels of per-capita meat consumption than in any other region. Yet economic development has nearly everywhere spurred a shift to dietary regimes with greater meat consumption, albeit with regional heterogeneity in meat type and magnitude. A growing regional economy, changing cultural attitudes, and a steeply increasing population could thus push regional demand upward in the coming decades, with significant depletion of regional and global natural resources and environmental repercussions. We study the historical association of the four main meat types with demand drivers in recently developed countries via seemingly unrelated regression (SUR) equation systems. Using the calibrated coefficients, we project trajectories of meat consumption in SSA to 2050 based on the SSP scenarios for GDP and population growth. Then, using a Leontief environmentally extended input-output (EEIO) framework built on the EXIOBASE3 database, we estimate the related energy, land and water requirements, and the implied greenhouse gas (CO2, CH4, N2O) emissions. We calculate that if production to meet those consumption levels takes place in the continent, then, compared to the current situation, global greenhouse gas (GHG) emissions would grow by 230 Mt CO2e (4.4% of today's global agriculture-related emissions), cropping and grazing would require an additional 4.2 · 10⁶ km² of land (more than half of the total arable land in SSA), total blue water consumption would rise by 10,300 Mm³ (0.89% of the global total), and an additional 1.2 EJ of energy (6% of today's total primary energy demand in the region) would be required. Alternative scenarios in which SSA is a net importer of final meat products are reported for comparison.
Local policy and attitudes towards farming practices and dietary choices will thus have a significant impact on both the regional environment and global GHG emissions.
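The Leontief EEIO step above reduces to two matrix operations: total output x = (I − A)⁻¹y for a final demand y, then environmental pressures F·x. The two-sector sketch below is a hedged illustration with invented coefficients; EXIOBASE3 supplies the real A and F at full sectoral and regional detail.

```python
# Two-sector Leontief EEIO sketch: total output x = (I - A)^-1 y,
# environmental pressures = F x. Coefficients are invented for
# illustration, not taken from EXIOBASE3.
import numpy as np

A = np.array([[0.1, 0.2],      # technical coefficients (input per unit output)
              [0.3, 0.1]])
y = np.array([100.0, 50.0])    # final demand by sector
F = np.array([[0.5, 1.2],      # row 0: e.g. GHG intensity per unit output
              [2.0, 0.1]])     # row 1: e.g. land intensity per unit output

x = np.linalg.solve(np.eye(2) - A, y)  # output needed to satisfy demand
pressures = F @ x                      # embodied environmental pressures

print(x, pressures)
```

Solving the linear system directly (rather than explicitly inverting I − A) is the standard numerically stable choice and scales to the thousands of sectors in a real multi-regional table.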
How to cite: Falchetta, G., Golinucci, N., and Noussan, M.: Representative meat consumption pathways for sub-Saharan Africa and their local and global energy and environmental implications, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6934, https://doi.org/10.5194/egusphere-egu2020-6934, 2020.
EGU2020-1526 | Displays | ITS1.12/BG1.20 | Highlight
The multiple interactions of banana production, biodiversity, trade and climate in the Philippines
Andrea Monica Ortiz
Banana is a globally important fruit, and the Philippines is one of the world's largest producers of bananas, both for domestic consumption and for export. While the popular fruit provides an important source of nutrition and economic revenue, banana production has many negative impacts on the environment. This is due to the input-intensive nature of banana production, as well as the habitat loss caused by plantation expansion driven by growing trade demand for Philippine bananas, primarily from China, Japan and South Korea. The increased homogeneity of landscapes under banana cultivation also affects threatened Philippine species.
Climate risk adds a further factor to the multiple interactions between banana production and the environment: the Philippines is vulnerable to climate change and climate hazards. Approximately 20 tropical cyclones enter the Philippine Area of Responsibility every year and are a significant cause of losses and damages to agriculture, particularly banana production, which is sensitive to strong winds. There is thus a complex set of interactions between banana production, its negative impacts on the environment, the increasing exposure of plantations to climate hazards, and the role of banana in the local diet and economy.
Data on agriculture, trade and tropical cyclones are used to show that a number of threatened Philippine species occur within agricultural pressure zones from banana production, some of which overlap with protected areas. An analysis of agricultural and economic data shows that damages from tropical cyclones are increasing, even though tropical cyclones themselves are not increasing in either intensity or frequency. This means that agricultural expansion has impacts both on biodiversity and on the sustainability of banana production itself. Several recommendations for adapting growing systems to be both resilient and more supportive of biodiversity are offered.
How to cite: Ortiz, A. M.: The multiple interactions of banana production, biodiversity, trade and climate in the Philippines, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1526, https://doi.org/10.5194/egusphere-egu2020-1526, 2020.
EGU2020-21492 | Displays | ITS1.12/BG1.20
Scaling-up sustainable intensification practices for rice production in East Africa
Robert Burtscher, Taher Kahil, Mikhail Smilovic, Diana Luna, Jenan Irshaid, Thomas Falk, Caroline Hambloch, Sylvia Tramberend, Elizabeth Tellez Leon, Paul Yillia, Junko Mochizuki, Paul Kariuki, Michael Hauser, and Yoshihide Wada
Food security has long been a challenge for the East Africa region and is becoming a pressing issue for the coming decades, because food demand is expected to increase considerably following rapid population and income growth. Agricultural production in the region thus needs to intensify, in a sustainable way, to keep up with food demand. However, the sustainable intensification of agricultural production faces many challenges, including low productivity, inadequate management, small-scale operations, and large climate variability. Several pilot initiatives involving a bundle of land and water management practices have been introduced in the region to tackle these challenges, but their large-scale implementation remains limited. In the framework of a research project jointly implemented by the International Institute for Applied Systems Analysis (IIASA), the Lake Victoria Basin Commission (LVBC) and the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT), we analyse upscaling opportunities for water and land management practices for the sustainable and resilient intensification of rice and fodder production systems in the extended Lake Victoria Basin in East Africa. The expected outcome of this project is an improved understanding of the upscaling of such practices through model simulations and integrated analysis of political economy aspects, governance, and social and gender dimensions.
This paper presents an integrated upscaling modeling framework that combines biophysical suitability analysis and economic optimization. Several production system options (i.e., management practices) for rice intensification are examined at high spatial resolution (0.5°×0.5°) in the extended Lake Victoria basin. The suitability analysis identifies areas suitable for the production system options based on a combination of biophysical factors such as climate, hydrology, vegetation and soil properties, using the Global Agro-Ecological Zones (GAEZ) model and the Community Water Model (CWatM). The economic optimization identifies the combination of production systems that maximizes their overall contribution to agricultural economic benefits subject to various technical and resource constraints, including commodity balance, land availability and suitability, water availability, labor availability and capital. The modeling framework is driven by a set of socioeconomic scenarios (e.g., the impact of population and income growth on food demand and agricultural productivity) and climate change scenarios (e.g., the impact on water resources availability) based on combinations of the Shared Socioeconomic Pathways (SSPs), Representative Concentration Pathways (RCPs), and bottom-up policy scenarios co-developed through stakeholder engagement with the Lake Victoria Basin Commission (LVBC). Results of this study show significant opportunities for the sustainable intensification of rice production in East Africa. Moreover, the study identifies the key biophysical and economic factors that could enable the upscaling of sustainable land and water management practices for rice production in the region. Overall, this study demonstrates the capacity of the proposed upscaling modeling framework as a systems approach to address the linkages between the intensification of agricultural production and the sustainable use of natural resources.
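The economic optimization step, at its core, is a constrained allocation problem: choose how much area to put under each production system to maximize benefit without exceeding land and water limits. The toy below brute-forces a two-system, one-cell version of this; a real implementation would use an LP/NLP solver over many systems and grid cells, and all numbers here are invented.

```python
# Toy allocation: hectares of two rice production systems chosen to
# maximize benefit subject to land and water limits (illustrative data,
# not the study's model or numbers).

BENEFIT = {"conventional": 300.0, "intensified": 500.0}   # $/ha
WATER = {"conventional": 8.0, "intensified": 11.0}        # 10^3 m3/ha
LAND_MAX = 100     # ha available
WATER_MAX = 950.0  # 10^3 m3 available

def best_allocation():
    """Brute-force search over integer-hectare allocations."""
    best = (-1.0, 0, 0)   # (benefit, ha conventional, ha intensified)
    for conv in range(LAND_MAX + 1):
        for inten in range(LAND_MAX - conv + 1):
            water = WATER["conventional"] * conv + WATER["intensified"] * inten
            if water > WATER_MAX:
                continue
            value = BENEFIT["conventional"] * conv + BENEFIT["intensified"] * inten
            if value > best[0]:
                best = (value, conv, inten)
    return best

print(best_allocation())  # here water, not land, is the binding constraint
```

Even this toy shows the framework's point: which constraint binds (water, land, labor, capital) determines which intensification option scales.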
How to cite: Burtscher, R., Kahil, T., Smilovic, M., Luna, D., Irshaid, J., Falk, T., Hambloch, C., Tramberend, S., Tellez Leon, E., Yillia, P., Mochizuki, J., Kariuki, P., Hauser, M., and Wada, Y.: Scaling-up sustainable intensification practices for rice production in East Africa, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21492, https://doi.org/10.5194/egusphere-egu2020-21492, 2020.
EGU2020-9627 | Displays | ITS1.12/BG1.20
Agricultural water consumption and crop prices
Benedetta Falsetti, Elena Vallino, Luca Ridolfi, and Francesco Laio
Most human activities depend on water, and agriculture alone accounts for 70% of all freshwater withdrawals worldwide. Where such withdrawals exceed sustainable levels, water scarcity represents a growing threat to food security. In this context, there has been an enduring debate on whether to assign an economic value to water. Some studies argue that water resources would be allocated more efficiently if they had a price reflecting their scarcity, and that a pricing policy would also provide incentives for more sustainable consumption. Building on these considerations, in this work we investigate whether the water consumed in agricultural production is reflected in crop prices.
In this research, we focus specifically on the production of agricultural primary goods to understand whether water consumption is taken into consideration in the prices of these products on the global market. We also consider the water component in terms of water availability per capita at the country level (Falkenmark Water Stress Indicator). Aware that water and land are usually regarded as a single entity, we analyze whether water, isolated from this relation, still has an impact.
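The Falkenmark Water Stress Indicator mentioned above classifies countries by annual renewable freshwater per capita. A minimal sketch of that classification, using the standard Falkenmark thresholds (1700, 1000 and 500 m³ per person per year):

```python
def falkenmark_class(renewable_water_m3, population):
    """Classify water stress from annual renewable freshwater per
    capita, using the standard Falkenmark thresholds (m3/person/year):
    >1700 no stress, 1000-1700 stress, 500-1000 scarcity,
    <500 absolute scarcity."""
    per_capita = renewable_water_m3 / population
    if per_capita > 1700:
        return per_capita, "no stress"
    if per_capita > 1000:
        return per_capita, "stress"
    if per_capita > 500:
        return per_capita, "scarcity"
    return per_capita, "absolute scarcity"
```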
We select twelve representative crops, analyzing their farm gate prices from 1991 to 2016 and collecting data for 162 countries in total. We identify two different behaviors: staple crops (e.g. wheat, maize, soybeans, and potatoes) tend to incorporate in their prices the amount of water employed during cultivation. In contrast, cash crops (e.g. coffee, cocoa beans, tea, vanilla), which are not crucial in human diets and are mainly produced for export, show a weaker relationship between water footprint and prices on the global market. These variations may be attributable to specific market dynamics of the two product groups. While different elements may influence the behavior of these two macro-categories of crops, it is important to understand how water is related to crop prices in order to pursue more efficient practices in water allocation and governance, improving environmental sustainability in this field.
How to cite: Falsetti, B., Vallino, E., Ridolfi, L., and Laio, F.: Agricultural water consumption and crop prices, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9627, https://doi.org/10.5194/egusphere-egu2020-9627, 2020.
EGU2020-9383 | Displays | ITS1.12/BG1.20
The water footprint of different diets within European sub-national geographical entities
Davy Vanham
The water footprint concept has been recognized as being highly valuable for raising awareness of the large quantity of water resources required to produce the food we consume. We present, for three major European countries (the United Kingdom, France and Germany), a geographically detailed nationwide food-consumption-related water footprint, taking into account socio-economic factors of food consumption, for both existing and recommended diets (healthy diet with meat, healthy pescetarian diet and healthy vegetarian diet). Using socio-economic data, national food surveys and international food consumption and water footprint databases, we were able to refine national water footprint data to the smallest possible administrative boundaries within a country (reference period 2007–2011). We found geographical differences in water footprint values for existing diets as well as for the reduction in water footprints associated with a change to the recommended healthy diets. For all 43,786 analysed geographical entities, the water footprint decreases for a healthy diet containing meat (range 11–35%). Larger reductions are observed for the healthy pescetarian (range 33–55%) and healthy vegetarian (range 35–55%) diets. In other words, shifting to a healthy diet is not only good for human health, but also substantially reduces consumption of water resources, consistently for all geographical entities throughout the three countries. Our full results are available as a supplementary dataset. These data can be used at different governance levels in order to inform policies targeted to specific geographical entities.
This presentation is based on a recent paper published in Nature Sustainability.
How to cite: Vanham, D.: The water footprint of different diets within European sub-national geographical entities, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9383, https://doi.org/10.5194/egusphere-egu2020-9383, 2020.
EGU2020-19724 | Displays | ITS1.12/BG1.20
Hydro-climatic and anthropic determinants of spatio-temporal variability of crop water footprint
Stefania Tamea, Marta Tuninetti, and Matteo Rolle
Multidisciplinary analyses of the water-food nexus are often based on the water footprint indicator. The water footprint measures the volume of water necessary to produce a good, and its unit counterpart (water footprint per unit weight of good) can be interpreted as an indicator of efficiency in the use of water resources. Crop water footprint refers to the unit water footprint of crops and it is defined as the volume of water evapotranspired during crop growth divided by the agricultural yield.
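The definition above — unit water footprint as evapotranspired volume divided by yield, split into green (precipitation) and blue (irrigation) components — reduces to simple arithmetic. A minimal sketch with illustrative inputs (the depth-to-volume conversion uses the fact that 1 mm over 1 ha equals 10 m³):

```python
def unit_water_footprint(et_green_mm, et_blue_mm, yield_t_per_ha):
    """Unit (crop) water footprint in m3 per tonne, split into its
    green (precipitation) and blue (irrigation) components.
    1 mm of evapotranspiration over 1 ha equals 10 m3 of water."""
    green = 10.0 * et_green_mm / yield_t_per_ha
    blue = 10.0 * et_blue_mm / yield_t_per_ha
    return green, blue, green + blue
```

For example, 300 mm of green and 100 mm of blue evapotranspiration at a yield of 4 t/ha give a unit water footprint of 1000 m³/t, three quarters of it green; a higher yield with the same evapotranspiration lowers the footprint, which is why yield trends dominate the temporal evolution discussed below.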
This contribution focuses on the spatial variability (at global scale) and temporal evolution (in the period 1961-2004) of the crop water footprint of four crops: wheat, rice, maize and soybean. In particular, we investigate the role of hydro-climatic and anthropic factors in determining the spatial and temporal variability. First, a sensitivity analysis is used to quantify the influence of precipitation, reference evapotranspiration, and agricultural yield on crop water footprint, separating between green water (precipitation) and blue water (irrigation). Second, an analysis of agricultural yield is presented that separates the effects of hydro-climatic and anthropic determinants on yield, with a special focus on temporal trends.
Results highlight the important role played by hydro-climatic variables in the separation of green and blue water, despite the limited sensitivity of total water footprint to such variables. In the temporal analysis, hydro-climatic variables are found to contribute to the inter-annual fluctuations of yield (and thus of crop water footprint) but the temporal trends are dominated by anthropic determinants. In conclusion, both hydro-climatic and anthropic variables have a role in spatio-temporal variability of crop water footprint, although their influence is different if considering different aspects of such variability.
How to cite: Tamea, S., Tuninetti, M., and Rolle, M.: Hydro-climatic and anthropic determinants of spatio-temporal variability of crop water footprint, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19724, https://doi.org/10.5194/egusphere-egu2020-19724, 2020.
EGU2020-10402 | Displays | ITS1.12/BG1.20
Water resources conservation and nitrogen pollution reduction under global food trade and agricultural intensification
Wenfeng Liu, Hong Yang, Matti Kummu, Junguo Liu, and Philippe Ciais
Global food trade entails virtual flows of agricultural resources and pollution across countries. Here we performed a global-scale assessment of the impacts of international food trade on blue water use, total water use, and nitrogen (N) inputs and losses in maize, rice, and wheat production. We simulated baseline conditions for the year 2000 and explored the impacts of an agricultural intensification scenario, in which low-input countries increase N and irrigation inputs to a greater extent than high-input countries. We combined a crop model with the Global Trade Analysis Project model. Results show that food exports generally occurred from regions with lower water and N use intensities, defined here as water and N uses in relation to crop yields, to regions with higher resource use intensities. Globally, food trade thus conserved a large amount of water resources and N applications, and also substantially reduced N losses. The trade-related conservation in blue water use reached 85 km³ yr⁻¹, accounting for more than half of total blue water use for producing the three crops. Food exported from the USA contributed the largest proportion of global water and N conservation as well as N loss reduction, but also led to substantial export-associated N losses in the country itself. Under the intensification scenario, the converging water and N use intensities across countries result in a more balanced world; crop trade will generally decrease, and the global water resources conservation and N pollution reduction associated with trade will shrink accordingly. The study provides useful information to understand the implications of agricultural intensification for international crop trade, crop water use and N pollution patterns in the world.
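The conservation logic above — trade saves water when the exporter's use intensity (water per tonne) is lower than the importer's — can be written as a one-line accounting identity. A minimal sketch with illustrative numbers, not figures from the study:

```python
def trade_water_saving(trade_t, exporter_m3_per_t, importer_m3_per_t):
    """Water 'saved' (positive) or 'lost' (negative) by one trade flow:
    the volume the importing country would have used to grow the crop
    itself, minus the volume actually used by the exporter."""
    return trade_t * (importer_m3_per_t - exporter_m3_per_t)
```

Exporting 1 Mt of grain from a country using 800 m³/t to one that would need 1300 m³/t saves 0.5 km³; reverse the direction and the same flow loses 0.5 km³. Summing this quantity over all bilateral flows gives the global conservation figures reported above.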
How to cite: Liu, W., Yang, H., Kummu, M., Liu, J., and Ciais, P.: Water resources conservation and nitrogen pollution reduction under global food trade and agricultural intensification, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10402, https://doi.org/10.5194/egusphere-egu2020-10402, 2020.
EGU2020-10200 | Displays | ITS1.12/BG1.20
Options for reducing costs of diesel pump irrigation systems in the Eastern Indo-Gangetic Plains
Timothy Foster, Roshan Adhikari, Subash Adhikari, Anton Urfels, and Timothy Krupnik
In many parts of South Asia, electricity for groundwater pumping has been directly or indirectly subsidised by governments to support intensification of agriculture. In contrast, farmers in large portions of the Eastern Indo-Gangetic Plains (EIGP) remain largely dependent on unsubsidised diesel or petrol power for irrigation pumping. Combined with a lack of comprehensive aquifer mapping, high energy costs of pumping limit the ability of farmers to utilise available groundwater resources. This increases exposure to farm production risks, in particular drought and precipitation variability.
To date, research to address these challenges has largely focused on efforts to enhance rural electrification or introduce renewable energy-based pumping systems that remain out of reach of many poor smallholders. However, there has been comparatively little focus on understanding opportunities to improve the cost-effectiveness and performance of the thousands of existing diesel-pump irrigation systems already in use in the EIGP. Here, we present findings from a recent survey of over 432 farmer households in the mid-western Terai region of Nepal – an important area of diesel-pump irrigation in the EIGP. Our survey provides information about key socio-economic, technological and behavioral aspects of diesel pump irrigation systems currently in operation, along with quantitative evidence about their impacts on agricultural productivity and profitability.
Survey results indicate that groundwater irrigation costs vary significantly between individual farmers. Farmers faced with higher costs of groundwater access irrigate their crops less frequently, which in turn results in lower crop yields and reduced overall farm profitability. Our data indicate that pumpset fuel efficiency may be a key driver of variability in irrigation costs, with large horsepower (>5 HP) Indian-made pumpsets appearing to have significantly higher fuel consumption rates (1.10 litres/hour and $18,000) and investment costs than alternative smaller horsepower (<5 HP) Chinese-made pumpsets (0.76 litres/hour and $30,000). Despite this, the majority of farmers continue to favour Indian pumpsets due to their higher reliability and well-established supply chains. Variability in access costs is also related to differences in farmers' capacity to invest in their own pumping systems. Pumpset rental rates in the region increase irrigation costs by a factor of 3-4 relative to the cost of fuel alone. Furthermore, rental rates are typically structured on an hourly basis, further exacerbating access costs for farmers with low-yielding wells or less efficient irrigation management practices.
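The cost structure described above is simple to make concrete: hourly pumping cost is fuel burned times fuel price, and renting a pumpset multiplies that by the 3-4x mark-up the survey reports. A minimal sketch; the fuel price is a hypothetical input, not a survey figure:

```python
def hourly_irrigation_cost(fuel_l_per_hr, fuel_price, rental_multiplier=1.0):
    """Cost of one hour of pumping: fuel consumption times fuel price,
    scaled by the rental mark-up (the survey finds renting raises
    costs by a factor of 3-4 relative to fuel alone)."""
    return fuel_l_per_hr * fuel_price * rental_multiplier
```

At the surveyed fuel-use rates (Indian-made 1.10 l/hr vs Chinese-made 0.76 l/hr), an owner of the Chinese pumpset spends roughly 30% less on fuel per hour, while a farmer renting the Indian pumpset at a 3.5x mark-up pays several times either owner's fuel cost.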
Our findings highlight that opportunities exist to reduce costs of groundwater use in existing diesel irrigation systems through improved access to more energy efficient pumping systems. This would have positive near-term impacts on agricultural productivity and rural livelihoods, in particular helping farmers to more effectively buffer crops against monsoonal variability. Such near-term improvements in diesel pump irrigation systems would also play an important role in supporting agriculture in the EIGP to transition to more sustainable and clean sources of energy for irrigation pumping. However, efforts to enhance irrigation access must also occur alongside improvements to aquifer monitoring and governance of extraction, in order to minimise risks of future depletion such as observed in other parts of the IGP.
How to cite: Foster, T., Adhikari, R., Adhikari, S., Urfels, A., and Krupnik, T.: Options for reducing costs of diesel pump irrigation systems in the Eastern Indo-Gangetic Plains, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10200, https://doi.org/10.5194/egusphere-egu2020-10200, 2020.
EGU2020-3366 | Displays | ITS1.12/BG1.20
Agricultural infrastructure: the forgotten key driving force on crop-related water footprints and virtual water flows in developing countries: a case for China
Hongrong Huang, La Zhuo, and Pute Wu
Agricultural infrastructure plays an important role in boosting food production and trade systems in developing countries but, as a ‘grey’ solution, it generates increasing risks to environmental sustainability. There is little information on the impacts of agricultural infrastructure development on water consumption and flows (i.e. water footprints and virtual water flows) related to crop production, consumption and trade, especially in developing countries with high water risk. Taking mainland China over 2000-2017 as a case study, we identified and evaluated the strengths and spatial heterogeneities of the main socio-economic driving factors of provincial water footprints and inter-provincial virtual water flows related to three staple crops (rice, wheat and maize). For the first time, we consider irrigation (II), electricity (EI) and road (RI) infrastructures in the driving-factor analysis through the extended STIRPAT (stochastic impacts by regression on population, affluence and technology) model. Results show that II, EI and RI in China expanded by factors of 33.8, 4.5 and 2.4, respectively, between 2000 and 2017. Although II was the most critical driver in reducing the per-unit water footprint, especially the blue water footprint of crop production (i.e., increasing water efficiency), its development led to greater total water consumption. This phenomenon was observed in the Jing-Jin region, the North Coast and Northwest China, which face water resource shortages. EI and RI increased provincial virtual water exports, with driving strengths that varied across regions. Clearly, the effects of agricultural infrastructure on regional water consumption, water productivity and virtual water patterns cannot be neglected.
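The STIRPAT model referenced above expresses an environmental impact as a multiplicative function of drivers, I = a·P^b·A^c·T^d, which becomes linear in logarithms (ln I = ln a + b ln P + c ln A + d ln T) so the elasticities can be estimated by regression; the "extended" form adds infrastructure variables (II, EI, RI) as further T-like terms. A minimal sketch of the functional form with made-up elasticities, not the study's estimates:

```python
def stirpat_impact(a, P, A, T, b, c, d):
    """Extended STIRPAT functional form: impact I = a * P^b * A^c * T^d,
    where P is population, A affluence, T technology, and (b, c, d) are
    elasticities estimated by log-linear regression. Infrastructure
    indicators would enter as additional T-like factors."""
    return a * P**b * A**c * T**d
```

An elasticity of b = 1.0 means a 1% rise in population drives a 1% rise in the water footprint, other factors held constant; negative elasticities (e.g. for irrigation efficiency) reduce the per-unit footprint, consistent with the II finding above.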
How to cite: Huang, H., Zhuo, L., and Wu, P.: Agricultural infrastructure: the forgotten key driving force on crop-related water footprints and virtual water flows in developing countries: a case for China, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3366, https://doi.org/10.5194/egusphere-egu2020-3366, 2020.
EGU2020-19504 | Displays | ITS1.12/BG1.20
Analyzing German transformation pathways' alignment with national and global climate and sustainability goals in the FABLE framework
Uwe A. Schneider and Jan Steinhauser
Successful implementation of the Paris climate agreement and the 2015 UN sustainability goals requires large-scale transformations in all relevant areas, with land-use and land-management among the most critical ones. On a global scale, a variety of transformation pathways has been discussed. However, these pathways often assume a uniform global transformation from which each country’s transformation pathway follows in a top-down manner. This approach faces implementation difficulties due to inconsistencies between resulting country pathways and the respective country’s political reality. Hence, a bottom-up approach may create less ambitious, but more realistic transformation pathways, both on a regional and global scale.
As part of an international effort to create national and global transformation pathways in line with the Paris climate agreement and the 2015 UN sustainability goals, this project aims to model impacts from current and projected German environmental policies and societal developments and to embed them into a broader international context. We analyze current data, trends and developmental goals for key aspects of German society and policies affecting environmental factors, utilizing the FABLE calculator.
By implementing these national results, as well as data from analogous projects focusing on other countries, into a global framework, we can compare the global impacts of projected national transformation pathways as well as the adjustments needed with regard to climate and sustainability goals. This approach will allow for partial corrections in each national model, more in line with each country's respective economic and political circumstances.
How to cite: Schneider, U. A. and Steinhauser, J.: Analyzing German transformation pathways' alignment with national and global climate and sustainability goals in the FABLE framework, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19504, https://doi.org/10.5194/egusphere-egu2020-19504, 2020.
EGU2020-3105 | Displays | ITS1.12/BG1.20
The Spatial Diffusion and Management of Pitaya Cultivation in the Hengchun Peninsula
Hsuan-Chi Tsai and Shu-Chuan Hsu
The Hengchun Peninsula, in Pingtung County, is the southernmost part of Taiwan. Its distinctive geographical location and climate have given rise to special industry patterns and cultural activities, such as onion, sisal agave and Gang-Kou tea. With climatic conditions limiting cultivation and the choice of crops, rice, yam and peanuts were the main crops grown from the Japanese era until the 2000s. In the last 20 years, however, pitaya has spread rapidly in the Hengchun Peninsula: since 2000 its planted area has grown to 152.9 ha, and as its production and planted area account for over 35% of Pingtung County's, pitaya has gradually become an important crop. The main purpose of this article is therefore to find out why pitaya became an important crop in this area: the factors behind farmers changing their cropping patterns to pitaya, the factors driving its spread across the Hengchun Peninsula, and the marketing channels farmers use to sell their crops. In addition to a literature review, a field study was conducted with a sample of 30 farmers, key informants and middlemen. Statistical data were sorted and compiled, and semi-structured interviews were conducted to clarify what drives farmers into pitaya cultivation.
It was found that pitaya cultivation in the Hengchun Peninsula originated in Bao-li village before spreading to other areas. The area around Bao-li village was also the most concentrated area of pitaya orchards, while the distribution elsewhere was relatively scattered. Statistical data also showed a recurring phenomenon in the Hengchun Peninsula in which specific cash crops develop rapidly and then gradually disappear after a short period. This occurred with sisal agave, sorghum, watermelon and other drought- and wind-resistant crops. The phenomenon reflects the limited selection of crops in this area, a consequence of the prevailing autumn and winter downslope winds (the luo-shan wind) and infertile land. Thus, once a crop with higher economic value than previous crops appears, farmers flock to plant it. Farmers also change crops in response to policy changes or encouragement from local farmers' associations. The results show that farmers consider pitaya to have many advantages over other crops, such as high profits, tolerance of the harsh environment, and a long production period, so many farmers who grew onions, pangola grass, rice and other crops have converted part of their land to pitaya. Regarding marketing, 50% of the farmers sell directly to customers, while the rest sell to local associations or commission agents. In addition, farmers with larger planting areas generally have more stable, fixed sales channels than smallholders, and farmers with stable sales channels tend to expand their pitaya cultivation area.
How to cite: Tsai, H.-C. and Hsu, S.-C.: The Spatial Diffusion and Management of Pitaya Cultivation in the Hengchun Peninsula, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3105, https://doi.org/10.5194/egusphere-egu2020-3105, 2020.
EGU2020-7373 | Displays | ITS1.12/BG1.20
Predict indoor environment of greenhouses for automatic greenhouse environmental control using machine learning techniques
Chia-Hui Hsu, Angela Huang, and Fi-John Chang
Maintaining stable crop production is the main benefit of greenhouses, which, however, consume additional resources to control the indoor environment compared with open-field cultivation. From the perspective of Water-Food-Energy Nexus (WFE Nexus) management, it is important to build an integrated methodology to estimate and optimize the crop production and resource consumption of greenhouses. Since greenhouse crop production is predictable if the indoor environment is well controlled, the key question is how to minimize water and energy consumption during the environmental control process. For this purpose, we first build a machine learning-based model to predict the indoor environment of a crop-growing greenhouse, including air temperature, relative humidity (RH), and soil water content. The predicted values are then checked against the suitability criteria of the crop and used to trigger environmental control whenever they violate those criteria. Under such circumstances, an estimation model determines which type and level of water and energy control mechanisms should be activated to meet the suitability criteria and maintain stable crop production. The study area is a cherry tomato greenhouse on a farm in Changhua County, Taiwan, where a total of 44,310 records were collected through the Internet of Things (IoT) from 2018 to 2019 at a 10-minute temporal resolution. This study also evaluates the efficiency of greenhouses under different climatic scenarios. The results are expected to contribute to automatic greenhouse environmental control, stimulating synergies in WFE Nexus management toward sustainable development.
Keywords: Water-Food-Energy Nexus (WFE Nexus); Greenhouse; Machine learning; Internet of Things (IoT)
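The control step described above (predicted values trigger action only when they violate the crop's suitability criteria) can be sketched as follows. This is a minimal illustration; the variable names and numeric ranges are assumptions for the sketch, not values from the study:

```python
# Hypothetical sketch of threshold-based greenhouse control: predicted
# indoor conditions are compared against crop suitability ranges, and an
# adjustment is returned only for variables outside their range.
# All ranges below are illustrative, not from the study.

SUITABILITY = {                     # illustrative ranges for cherry tomato
    "air_temp_c": (18.0, 28.0),
    "relative_humidity": (60.0, 85.0),
    "soil_water_content": (0.25, 0.40),
}

def control_actions(predicted: dict) -> dict:
    """Return the adjustment needed to bring each predicted variable
    back inside its suitability range (0.0 means no action needed)."""
    actions = {}
    for var, (low, high) in SUITABILITY.items():
        value = predicted[var]
        if value < low:
            actions[var] = low - value    # raise (heat / humidify / irrigate)
        elif value > high:
            actions[var] = high - value   # lower (ventilate / dehumidify)
        else:
            actions[var] = 0.0
    return actions

print(control_actions({"air_temp_c": 31.0,
                       "relative_humidity": 70.0,
                       "soil_water_content": 0.20}))
```

In the paper's setup, the estimation model would additionally choose among concrete water and energy control mechanisms; the sketch only identifies which variables need correction and by how much.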
How to cite: Hsu, C.-H., Huang, A., and Chang, F.-J.: Predict indoor environment of greenhouses for automatic greenhouse environmental control using machine learning techniques, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7373, https://doi.org/10.5194/egusphere-egu2020-7373, 2020.
EGU2020-12305 | Displays | ITS1.12/BG1.20
Addressing Deforestation in Global Supply Chains: The Industry Approach
Sophia Carodenuto
The winner of the International Statistic of the Decade is 8.4 million – the number of football pitches deforested from 2000 to 2019 in the Amazon rainforest. The Royal Statistical Society selected this statistic to give a powerful visual to one of the decade’s worst examples of environmental degradation. Global food supply chains are the major driver behind this deforestation. As globalization has dispersed the production of goods around the world, global supply chains increasingly displace the environmental and social impacts of consumption in rich and emerging economies to distant locations. Grown predominantly in (sub)tropical ecosystems and consumed in industrialized economies, cocoa/chocolate represents the inherent transnational challenges of many of today’s highly prized foods. Chocolate’s distinct geographies of production and consumption result in forest loss and persistent poverty in places far from the immediate purview of consumers. Despite growing public awareness and media attention, most consumers of conventional cocoa/chocolate products are unable to know the precise origins of their chocolate due to its complex supply chain involving multiple intermediaries. Outside of niche chocolate products that carry significantly higher price tags, the average chocolate consumer buying a Mars bar or Reese’s peanut butter cup remains in the dark about the social and environmental impacts of their purchases. In 2017, the global cocoa/chocolate industry responded by committing itself to “zero deforestation cocoa,” aiming for full supply chain traceability to ultimately end deforestation and restore forest areas in cocoa origins.
The problem that this research aims to address is that, despite their continued proliferation, corporate zero deforestation supply chain initiatives have thus far had only modest success in reaching their stated aims (Lambin et al. 2018). As company pledges grow in number and magnitude, deforestation continues in many commodity production areas, especially in tropical forests (Curtis et al. 2018). Through a systematic review of company pledges, this research brings more understanding to what precisely the global cocoa industry is committing to, and how these pledged changes are meant to be rolled out in practice. This knowledge will improve accountability by bringing clarity to questions surrounding who is meant to do what, and how, along the bumpy road to zero deforestation cocoa. Further, this research will shed light on the lesser-known actors in the cocoa supply chain, the intermediary cocoa traders often operating informally in cocoa origins, through a case study in Côte d’Ivoire, the world’s number one cocoa exporter. As technological advancements in commodity traceability and forest monitoring reduce the perceived distance between cocoa producers and their downstream buyers, supply chain actors are forging new partnerships to reduce the climate footprint of chocolate. This research accompanies one of these innovative partnerships between cocoa farming and chocolate eating communities.
References
Curtis et al. (2018). Classifying drivers of global forest loss. Science, 361(6407), 1108-1111.
Lambin, E. F., et al. (2018). The role of supply-chain initiatives in reducing deforestation. Nature Climate Change, 8, 109–116. https://doi.org/10.1038/s41558-017-0061-1
How to cite: Carodenuto, S.: Addressing Deforestation in Global Supply Chains: The Industry Approach, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12305, https://doi.org/10.5194/egusphere-egu2020-12305, 2020.
EGU2020-1206 | Displays | ITS1.12/BG1.20
Impacts of Land Use Change on Food Security in Nigeria: An integration of stakeholder participation in bioeconomic modelling
Kashimana Amanda Ivo
A population of two hundred million growing at 2.5% per year, combined with uncertainty about climatic and societal change, challenges development goals in Nigeria, particularly food security. Food security challenges primarily originate from conflicts between agricultural and forestry land systems, which drive changes in both. Agricultural and forestry land systems constitute 77.7% and 7.7% of Nigeria's land area, respectively. Yet, pressured by an increasing population, a changing climate, societal change and seemingly divergent policy objectives, these systems have failed to ensure food security. The challenge for Nigeria is to simultaneously maintain a 5% annual increment in food production and conserve 10% of its land area as forest. With agriculture already occupying 77.7% of the total land area, what would these two targets mean for the agricultural and forestry systems? Would they require expansion, intensification, or an integration of both systems? This paper provides insights into opportunities and trade-offs for optimal land use systems in Nigeria by asking how its land use can be optimized for biodiversity conservation and agricultural production targets, and what plausible governance, management technologies and policy adjustments can aid food security in Nigeria, and at what cost.
How to cite: Ivo, K. A.: Impacts of Land Use Change on Food Security in Nigeria: An integration of stakeholder participation in bioeconomic modelling, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1206, https://doi.org/10.5194/egusphere-egu2020-1206, 2020.
EGU2020-2989 | Displays | ITS1.12/BG1.20
Towards an artificial carbohydrates supply on Earth
Florian Dinger and Ulrich Platt
How can a growing global population be fed in a secure and sustainable way? Conventional, biogenic agriculture has so far failed to provide a reliable concept that circumvents its severe environmental externalities, such as the massive use of land, irrigation water, fertiliser, pesticides, herbicides, and fossil fuel. In contrast, the artificial synthesis of carbohydrates from atmospheric carbon dioxide, water, and renewable energy would not only allow highly reliable production without those externalities, but would also increase the agricultural capacity of our planet by several orders of magnitude. All required technology is either commercially available or at least developed at lab scale. However, no directed research has yet been conducted towards industry-scale carbohydrate synthesis, because biogenic carbohydrate production has been economically more competitive. Taking the environmental and socioeconomic externalities of conventional sugar production into account, this economic narrative has to be questioned. We estimate the production cost of artificial sugar at ~1 €/kg. While today's spot market price for conventional sugar is about 0.3 €/kg, we estimate its total cost (including external costs) at >0.9 €/kg in humid regions and >2 €/kg in semi-arid regions. Accordingly, artificial sugar already appears to be the less expensive mode of production. Artificial sugar production would in principle also allow the subsequent synthesis of other carbohydrates, such as starch, as well as of fats. These synthetic products could be fed to microorganisms, fungi, insects, or livestock in order to enhance the sustainability of the biogenic production of, e.g., proteins.
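The cost argument above is simple arithmetic on the abstract's own estimates, which can be made explicit; the figures below are the bounds quoted in the text, not independent data:

```python
# Back-of-the-envelope check of the quoted sugar cost estimates (€/kg).
# "Total cost" for conventional sugar includes environmental and
# socioeconomic external costs, as in the abstract.
artificial = 1.0                  # estimated production cost, artificial sugar
conventional_spot = 0.3           # spot market price, externalities excluded
conventional_humid = 0.9          # lower bound on total cost, humid regions
conventional_semiarid = 2.0       # lower bound on total cost, semi-arid regions

# Externalities alone account for at least this fraction of the true cost
# of conventional sugar in humid regions:
external_share_humid = 1 - conventional_spot / conventional_humid
print(f"external share (humid): >= {external_share_humid:.0%}")

# Once externalities are counted, artificial sugar undercuts the
# conventional total wherever that total exceeds ~1 €/kg:
print(artificial < conventional_humid)     # humid regions
print(artificial < conventional_semiarid)  # semi-arid regions
```

On spot prices alone artificial sugar looks roughly three times more expensive; the comparison flips in semi-arid regions, and approaches parity in humid ones, only because the conventional totals are lower bounds.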
How to cite: Dinger, F. and Platt, U.: Towards an artificial carbohydrates supply on Earth, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2989, https://doi.org/10.5194/egusphere-egu2020-2989, 2020.
EGU2020-18677 | Displays | ITS1.12/BG1.20
Climate SMART Agriculture: How well does the agricultural sector in Luxembourg perform in terms of climate change?
Evelyne Stoll, Christian Schader, Torsten Bohn, Rachel Reckinger, Laura Leimbrock, Gilles Altmann, and Stéphanie Zimmer
In Luxembourg, the agricultural sector was responsible for 711.7 Gg CO2-equivalents in 2016, corresponding to 6.95 % of the country's total greenhouse gas (GHG) emissions. Over 50 % of its farms are specialist grazing livestock farms, and globally, beef and cattle milk production together account for over 60 % of the livestock sector's emissions. Thus, the climate impact of the whole agricultural sector in Luxembourg can be significantly lowered by reducing the GHG emissions of the specialist grazing livestock sector. Beyond farm type, however, the GHG emissions of a farm are also influenced by other factors, such as management system and farming practices. To enable a transition towards more climate-positive agriculture, insights into sustainability performance in terms of climate change are needed.
The aim of this study is to determine the current sustainability performance of the Luxembourgish specialist grazing livestock sector in terms of climate change. The climate impact of the different specialist grazing livestock farm types (OTE (orientation technico-économique) 45, specialist dairying; OTE 46, specialist cattle rearing and fattening; and OTE 47, cattle dairying, rearing and fattening combined) and of different management systems (conventional or organic) was assessed at farm level. Furthermore, the relationship between sustainability performance in terms of climate change and other areas of sustainability is being studied. Farming practices of 60 farms typical for Luxembourg with regard to their share of arable land and permanent grassland (OTE 45: 3 farms; OTE 46: 15; OTE 47: 11; conventional: 44; organic: 16) and their respective sustainability implications were assessed in 2019 according to the FAO SAFA Guidelines (Guidelines for the Sustainability Assessment of Food and Agriculture Systems, 2014) using the Sustainability Monitoring and Assessment RouTine (SMART)-Farm Tool (v5.0). Organic farms were highly overrepresented, at 26.7 % of the sample compared with 5 % of all Luxembourgish farms. The data were collected during a farm visit and a 3 h interview with the farm manager. The impact of management system and farm type on SAFA goal achievement for the sub-theme Greenhouse Gases (GHG) was studied.
The results show that the sustainability performances of the participating farms were moderate to good. Goal achievement for the sub-theme GHG was moderate and did not differ significantly between the three farm types (OTE 45: 53.3 % ±3.9 SD goal achievement; OTE 46: 55.6 % ±7.3 SD; OTE 47: 54.6 % ±6.9 SD). Organic farms showed a significantly higher mean goal achievement for GHG than conventional farms (p < 0.001) (organic: 58.3 % ±6.0 SD; conventional: 52.6 % ±4.4 SD). For indicators positively impacting GHG, the organic and the OTE 46 farms generally had higher ratings. Correlations between GHG and the other sub-themes were found mainly in the Environmental Integrity dimension, showing that implementing climate-positive farming practices can also improve other ecological aspects. The indicator analysis identified the following linchpins: increased protein autarky, closing of farming cycles, and a holistic approach with strategic decision-making leading to harmonized actions towards a sustainable and climate-positive farming system.
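The organic-vs-conventional comparison reported above can be checked from the summary statistics alone. A minimal sketch, assuming a Welch t-statistic computed from the group means, SDs and sizes (the abstract does not name the test actually applied):

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t-statistic from two groups' means, SDs and sizes."""
    return (m1 - m2) / math.sqrt(s1**2 / n1 + s2**2 / n2)

# Reported GHG goal achievement:
# organic: 58.3 % ± 6.0 SD, n = 16; conventional: 52.6 % ± 4.4 SD, n = 44
t = welch_t(58.3, 6.0, 16, 52.6, 4.4, 44)
print(round(t, 2))
```

A t-statistic of this size with these group sizes is consistent with the reported p < 0.001; the exact p-value would additionally need the Welch–Satterthwaite degrees of freedom.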
How to cite: Stoll, E., Schader, C., Bohn, T., Reckinger, R., Leimbrock, L., Altmann, G., and Zimmer, S.: Climate SMART Agriculture: How well does the agricultural sector in Luxembourg perform in terms of climate change?, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18677, https://doi.org/10.5194/egusphere-egu2020-18677, 2020.
EGU2020-19939 | Displays | ITS1.12/BG1.20
Effectiveness of Soil Conservation Practices as Climate Change Adaptations in Eastern US Corn-Soybean Production
Robert Hill, Natalia Salazar, and Adel Shirmohammadi
Climate change is projected to affect the atmospheric variables that control crop production in the Eastern United States (US). Given that changes in these variables over the next decades are currently unavoidable, crop production will need to adapt to the expected changes in order to prevent or reduce yield losses. The main objectives of this study were: 1) to evaluate the effects of climate change on yields in rainfed corn (Zea mays L.)-soybean (Glycine max (L.) Merr.) rotation systems in the Eastern US and 2) to test two soil conservation practices—no tillage and winter cover cropping with rye (Secale cereale L.)—for their effectiveness as climate change adaptations in these systems. We used the Agricultural Policy/Environmental eXtender (APEX) model to simulate corn-soybean rotation systems in the future (2041‒2070) at nine land grant university research farms located throughout the Eastern US corn-soybean production belt from New York to Georgia. The simulated effects of climate change on yields varied depending on the climate model used, ranging from decreases to increases. Mean corn yields experienced decreases of 15‒51% and increases of 14‒85% while mean soybean yields experienced decreases of 7.6‒13% and increases of 22‒170%. Yield decreases were most common under the climate model predicting the highest increase in temperature and a reduction in precipitation, whereas yield increases were most common in the climate models predicting either a relatively small increase in temperature or a relatively large increase in precipitation. In many cases, the effects of climate change on yields worsened with time within the 30-year future period. The effects of climate change differed between the northern, central, and southern regions of the Eastern US, generally improving with latitude. Climate change generally affected corn yields more negatively or less positively than it did soybean yields. 
No tillage and rye cover cropping did not serve as effective climate change adaptations with regard to corn or soybean yields. In fact, planting rye after corn and soybeans reduced mean corn yields by 3.1‒28% relative to the control (no cover crop). We speculate that this yield decrease occurred because the rye cover crop reduced the amount of soil water available to the following corn crop.
How to cite: Hill, R., Salazar, N., and Shirmohammadi, A.: Effectiveness of Soil Conservation Practices as Climate Change Adaptations in Eastern US Corn-Soybean Production, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19939, https://doi.org/10.5194/egusphere-egu2020-19939, 2020.
ITS1.15/BG3.56 – Amazon forest – a natural laboratory of global significance
EGU2020-18290 | Displays | ITS1.15/BG3.56
AmazonFACE – Assessing the response of Amazon rainforest functioning to elevated atmospheric carbon dioxide concentrations
Anja Rammig, Katrin Fleischer, Sabrina Garcia, Nathielly Martins, Juliane Menezes, Lucia Fuchslueger, Karst Schaap, Iokanam Pereira, Bruno Takeshi, Carlos Quesada, Bart Kruijt, Richard Norby, Alessandro Araujo, Tomas Domingues, Thorsten Grams, Iain Hartley, Martin De Kauwe, Florian Hofhansl, and David Lapola
The rapid rise in atmospheric CO2 concentration over the past century is unprecedented. It has unambiguously influenced Earth’s climate system and terrestrial ecosystems. Elevated atmospheric CO2 concentrations (eCO2) have induced an increase in biomass and thus, a carbon sink in forests worldwide. It is assumed that eCO2 stimulates photosynthesis and plant productivity and enhances water-use efficiency – the so-called CO2-fertilization effect, which may provide an important buffering effect for plants during adverse climate conditions. For these reasons, current global climate simulations consistently predict that tropical forests will continue to sequester more carbon in aboveground biomass, partially compensating human emissions and decelerating climate change by acting as a carbon sink. In contrast to model simulations, several lines of evidence point towards a decreasing carbon sink strength of the Amazon rainforest. Reliable predictions of eCO2 effects in the Amazon rainforest are hindered by a lack of process-based information gained from ecosystem scale eCO2 experiments. Here we report on baseline measurements from the Amazon Free Air CO2 Enrichment (AmazonFACE) experiment and preliminary results from open-top chamber (OTC) experiments with eCO2. After three months of eCO2, we find that understory saplings increased carbon assimilation by 17% (under light saturated conditions) and water use efficiency by 39% in the OTC experiment. We present our main hypotheses for the FACE experiment, and discuss our expectations on the potential driving processes for limiting or stimulating the Amazon rainforest carbon sink under eCO2. We focus on possible effects of eCO2 on carbon uptake and allocation, nutrient cycling, water-use and plant-herbivore interactions, which need to be implemented in dynamic vegetation models to estimate future changes of the Amazon carbon sink.
How to cite: Rammig, A., Fleischer, K., Garcia, S., Martins, N., Menezes, J., Fuchslueger, L., Schaap, K., Pereira, I., Takeshi, B., Quesada, C., Kruijt, B., Norby, R., Araujo, A., Domingues, T., Grams, T., Hartley, I., De Kauwe, M., Hofhansl, F., and Lapola, D.: AmazonFACE – Assessing the response of Amazon rainforest functioning to elevated atmospheric carbon dioxide concentrations, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18290, https://doi.org/10.5194/egusphere-egu2020-18290, 2020.
EGU2020-11182 | Displays | ITS1.15/BG3.56
Amazon Carbon Balance affected by human activities and Climate change
Luciana Vanni Gatti, Luana Basso, Lucas Domingues, Henrique Cassol, Luciano Marani, John Miller, Manuel Gloor, Luiz Aragao, Egidio Arai, Graciela Tejada, Liana Anderson, Celso Von Randow, Wouter Peters, Alber Ipia Sanchez, Caio Correia, Stephane Crispim, and Raiane Neves
The Amazon is the major tropical land region, with critical processes, such as the carbon cycle, not yet fully understood. Only very few regionally representative long-term greenhouse gas measurements are available in the tropics. The Amazon accounts for 50% of Earth's tropical rainforests and hosts the largest carbon pool in vegetation and soils (~200 PgC). The net carbon exchange between tropical land and the atmosphere is critically important because the stability of carbon in forests and soils can be disrupted on short time scales. The main processes releasing carbon to the atmosphere are deforestation, degradation, fires, and changes in growing conditions due to increased temperatures and droughts. Such changes may thus cause feedbacks on global climate.
In the last 40 years, the mean temperature of the Amazon has increased by 1.1 ºC. The length and intensity of the dry season are also increasing, subjecting the forest to stronger stress each year.
Over the same period we observed a 17% reduction in precipitation during the dry season and the dry-to-wet transition season. This reduction in precipitation and the increase in temperature during the dry season exacerbate vegetation water stress, with consequences for the carbon balance.
To understand the consequences of human-driven and climatic changes for the carbon budget of Amazonia, we put in place the first program with regional representativeness, from 2010 onwards, aiming to quantify greenhouse gases based on an extensive collection of vertical profiles of CO2 and CO. Regular vertical profiles from the ground up to 4.5 km height were performed at four sites along the main air stream over the Amazon. Between 2010 and 2018 we performed 669 vertical profiles over four strategic regions that represent fluxes over the entire Amazon region.
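A common way to turn such aircraft profiles into regional fluxes is a column-budget calculation: integrate the profile's enhancement over the background mole fraction through the column, then divide by the air-mass travel time from the background reference. The sketch below illustrates that general idea with invented numbers; it is an assumption about the technique, not the study's exact processing chain.

```python
def column_flux(heights_m, enhancement, travel_time_s):
    """Trapezoidal column integral of an enhancement profile divided by the
    air-mass travel time: a mean surface flux for the upwind influence
    region, in units of [enhancement] * m / s."""
    column = 0.0
    for (z0, c0), (z1, c1) in zip(
        zip(heights_m, enhancement), zip(heights_m[1:], enhancement[1:])
    ):
        column += 0.5 * (c0 + c1) * (z1 - z0)  # trapezoid segment
    return column / travel_time_s

# Hypothetical CO2 enhancement over background (ppm), decaying with altitude,
# and ~2.5 days of air-mass travel from the background reference.
heights = [300.0, 1000.0, 2000.0, 3500.0, 4500.0]   # sample altitudes (m)
enhancement = [1.5, 0.8, 0.3, 0.1, 0.0]             # observed minus background (ppm)
flux = column_flux(heights, enhancement, travel_time_s=2.5 * 86400.0)
print(f"mean flux = {flux:.2e} ppm m s^-1 (convertible via air density to mol m-2 s-1)")
```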
The observed variability of carbon fluxes during these 9 years is correlated with climate variability (temperature, precipitation, GRACE, EVI) and human-driven changes (biomass burning). The correlations were performed inside the influence area of each studied site and show how high temperatures and water stress during the dry season are affecting the Amazon carbon balance. In the southeast of the Amazon these extreme conditions dominate the annual balance: fire emission is the main source of carbon to the atmosphere, and it is not compensated by the carbon removal of the old-growth Amazon forest. The western Amazon almost compensates for the eastern carbon source. During wet or normal years the Amazon carbon balance is around neutral, but during dry years the uptake capacity is severely compromised.
How to cite: Gatti, L. V., Basso, L., Domingues, L., Cassol, H., Marani, L., Miller, J., Gloor, M., Aragao, L., Arai, E., Tejada, G., Anderson, L., Von Randow, C., Peters, W., Ipia Sanchez, A., Correia, C., Crispim, S., and Neves, R.: Amazon Carbon Balance affected by human activities and Climate change, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11182, https://doi.org/10.5194/egusphere-egu2020-11182, 2020.
EGU2020-17538 | Displays | ITS1.15/BG3.56
Analysis of canopy structural and functional properties of tropical forests in a fertilisation experiment by Sentinel-2 images
Maral Maleki, Lore Verryckt, Jose Miguel Barrios, Josep Peñuelas, Ivan Janssens, and Manuela Balzarolo
Tropical forests such as the Amazon are a repository of ecological services. Understanding how tropical forests respond to climate helps to improve ecosystem modelling and to reduce the uncertainty in carbon balance calculations. The availability of high-resolution satellite imagery such as Sentinel-2 provides a powerful tool for analysing canopy structural and functional shifts over time, especially for tropical forests.
In this study, we examined the effect of nutrient availability (nitrogen (N) and phosphorus (P)) on canopy structural and functional properties in the tropical forest of French Guiana. In situ observations of canopy structure and functioning (i.e. photosynthesis, leaf N, chlorophyll content) were collected at two experimental sites (Paracou and Nouragues). Three topographical positions were considered at each site (top of the hill, middle, and bottom of the slope), and four plots were manipulated with different levels of fertilization (control, N, P, NP) starting in September 2016. Statistical analyses were conducted to analyze how the fertilization affects forest canopy seasonality and whether differences existed between sites and across positions. Furthermore, we tested whether Sentinel-2 data could help describe the canopy changes observed in the field. To this end, we collected all Sentinel-2 images available before the start of the experiment, which represent the natural situation, and during the two years after the intensive and repeated fertilization. Greenness, chlorophyll, and N- and P-related indicators were calculated from the Sentinel-2 images.
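As an illustration of the kind of indicators mentioned, the sketch below computes two widely used indices from Sentinel-2 reflectances: NDVI (greenness) and the red-edge chlorophyll index. The band roles (B4 = red, B5 = red edge, B8 = NIR) follow the Sentinel-2 convention; the reflectance values are invented examples, not observations from the Paracou or Nouragues plots.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index (canopy greenness)."""
    return (nir - red) / (nir + red)

def ci_red_edge(nir, red_edge):
    """Red-edge chlorophyll index: CIred-edge = NIR / red-edge - 1."""
    return nir / red_edge - 1.0

# Invented surface reflectances for a dense tropical canopy:
b4, b5, b8 = 0.03, 0.08, 0.40   # B4 = red, B5 = red edge, B8 = NIR
print(f"NDVI        = {ndvi(b8, b4):.2f}")
print(f"CI red-edge = {ci_red_edge(b8, b5):.2f}")
```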
Keywords: Sentinel-2, tropical forest, soil fertilization, topographical position.
How to cite: Maleki, M., Verryckt, L., Barrios, J. M., Peñuelas, J., Janssens, I., and Balzarolo, M.: Analysis of canopy structural and functional properties of tropical forests in a fertilisation experiment by Sentinel-2 images, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17538, https://doi.org/10.5194/egusphere-egu2020-17538, 2020.
EGU2020-558 | Displays | ITS1.15/BG3.56
Amazon CH4 budget and its controls based on atmospheric data from vertical profiles measurements
Luana Basso, Luciana Gatti, Luciano Marani, Henrique Cassol, Graciela Tejada, Lucas Domingues, Caio Correia, Stephane Crispim, Raiane Neves, Alber Ipia, Egidio Arai, Luiz Aragão, John Miller, and Manuel Gloor
Wetland emissions are considered the main natural global methane (CH4) source, but their budget remains highly uncertain. Tropical regions like the Amazon host some of the largest wetlands and seasonally flooded areas on the globe. However, tropical regions are still poorly covered by large-scale integrating observations. Here we present the first atmospheric sampling of the lower troposphere over the Amazon using regular vertical-profile greenhouse gas and carbon monoxide (CO) observations at four sites. Vertical profiles were sampled fortnightly between 2010 and 2018 using light aircraft and high-precision greenhouse gas and CO analysis of flask air, to provide solid seasonal and annual CH4 budgets with large spatial resolution. The results show a regional variation in CH4 emissions. There are comparably high emissions from the northeast part of the Amazon, exhibiting strong variability, with particularly high CH4 fluxes at the beginning of the wet season (January to March). A second period of high emissions occurs during the dry season. The cause of the high emissions is unclear. The other three sites, located further downwind along the main air stream, show lower emissions, approximately 25‒30% of those observed in the northeast region, with a clear annual seasonality. In addition, these data show interannual variability in the magnitude of emissions, so we discuss how they correlate with climate variables (such as temperature and precipitation) and with human-driven changes (such as biomass burning) that could be influencing this variability. Over the full period the Amazon (total area of around 7.2 million km2) was a source of CH4 of approximately 46 ± 6 Tg/year, which represents 8% of the global CH4 flux to the atmosphere. Using a CO/CH4 emission ratio calculated in this study, we find a biomass burning contribution varying between 10 and 23% of the total flux at each site.
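The biomass-burning attribution described in the last sentence can be illustrated with a small calculation: divide the fire-derived CO flux by the CO:CH4 emission ratio to get the fire CH4 contribution, then express it as a fraction of the total CH4 flux. All numbers below are invented for illustration and are not the study's site values.

```python
def bb_ch4_fraction(total_ch4_flux, fire_co_flux, co_to_ch4_ratio):
    """Fraction of the total CH4 flux attributable to biomass burning.

    total_ch4_flux  : total CH4 flux (any unit, e.g. Tg/yr)
    fire_co_flux    : CO flux attributed to fires (consistent units)
    co_to_ch4_ratio : fire emission ratio CO:CH4 (unit-consistent)
    """
    fire_ch4_flux = fire_co_flux / co_to_ch4_ratio
    return fire_ch4_flux / total_ch4_flux

# Invented example: a site emitting 10 units of CH4, with 60 units of
# fire-derived CO and an emission ratio CO:CH4 of 40.
share = bb_ch4_fraction(10.0, 60.0, 40.0)
print(f"biomass-burning share of CH4 flux: {share:.0%}")
```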
Acknowledgment: FAPESP (2019/23654-2, 2018/14006-4, 2016/02018-2, 2008/58120-3, 2011/51841-0), NASA, ERC (GEOCARBON, Horizon 2020/ASICA), NERC (NE/F005806/1), CNPq (480713/2013-8).
How to cite: Basso, L., Gatti, L., Marani, L., Cassol, H., Tejada, G., Domingues, L., Correia, C., Crispim, S., Neves, R., Ipia, A., Arai, E., Aragão, L., Miller, J., and Gloor, M.: Amazon CH4 budget and its controls based on atmospheric data from vertical profiles measurements, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-558, https://doi.org/10.5194/egusphere-egu2020-558, 2020.
EGU2020-16019 | Displays | ITS1.15/BG3.56
Temporal variations of CH4/CO2/CO fluxes in the central Amazon rainforest
Shujiro Komiya, Jost Lavric, David Walter, Santiago Botia, Alessandro Araujo, Marta Sá, Matthias Sörgel, Stefan Wolff, Hella Asperen, Fumiyoshi Kondo, and Susan Trumbore
Amazon rainforests and soils contain large amounts of carbon, which is under pressure from ongoing climate and land use change in the Amazon basin. It is estimated that methane (CH4), an important greenhouse gas, is largely released from the flooded wetlands of the Amazon, but the trends and balances of CH4 in the Amazon rainforest are not yet well understood. In addition, the change in atmospheric CH4 concentration is strongly associated with a change in carbon monoxide (CO) concentration, often caused by the human-induced combustion of biomass that usually peaks during dry season. Understanding the long-term fluctuations in the fluxes of greenhouse gases in the Amazon rainforest is essential for improving our understanding of the carbon balance of the Amazon rainforest.
Since March 2012, we have continuously measured atmospheric CO2/CH4/CO concentrations at five levels (79, 53, 38, 24, and 4 m a.g.l.) using two wavelength-scanned cavity ring-down spectroscopy analyzers (G1301 and G1302, Picarro Inc., USA), which are automatically calibrated on site every day. In addition, we measured the CO2 flux by the eddy covariance method at the same tower. We estimated the CO2/CH4/CO fluxes by combining the vertical profiles of the CO2/CH4/CO concentrations with the flux-gradient method. Our results generally show no major difference in CO2 flux between the wet and dry seasons, except for 2017, when an elevated CO2 uptake was documented during the dry season despite the lowest precipitation between 2014 and 2018. The CH4 flux showed the largest CH4 emission during the dry season of 2016. Further results will be analyzed and discussed in the presentation.
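The flux-gradient method mentioned above estimates the turbulent flux as F = -K · dC/dz from the concentration difference between two measurement levels. A minimal sketch with invented values, assuming the eddy diffusivity K is obtained independently (for example from the eddy-covariance CO2 flux and the CO2 gradient; that choice is an assumption here, not a detail given in the abstract):

```python
def flux_gradient(c_lower, c_upper, z_lower, z_upper, k_eddy):
    """Turbulent flux from a two-level concentration gradient.

    F = -K * dC/dz; positive F means an upward flux (surface source).
    c_lower, c_upper : mole fractions at the two heights (e.g. ppb)
    z_lower, z_upper : measurement heights in m (z_upper > z_lower)
    k_eddy           : eddy diffusivity in m2 s-1 (assumed known here)
    """
    gradient = (c_upper - c_lower) / (z_upper - z_lower)
    return -k_eddy * gradient

# Invented CH4 values: mixing ratio decreasing with height -> emission.
f = flux_gradient(c_lower=1875.0, c_upper=1870.0, z_lower=4.0, z_upper=79.0,
                  k_eddy=5.0)
print(f"flux = {f:.3f} ppb m s^-1 (positive: surface source)")
```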
How to cite: Komiya, S., Lavric, J., Walter, D., Botia, S., Araujo, A., Sá, M., Sörgel, M., Wolff, S., Asperen, H., Kondo, F., and Trumbore, S.: Temporal variations of CH4/CO2/CO fluxes in the central Amazon rainforest, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-16019, https://doi.org/10.5194/egusphere-egu2020-16019, 2020.
EGU2020-561 | Displays | ITS1.15/BG3.56
Is it feasible to relate CO2 atmospheric measurements with land use and cover change data? - A primary assessment of land use and cover change datasets in the Amazon
Graciela Tejada, Luciana Gatti, Luana Basso, Luciano Marani, Henrique Cassol, Egidio Arai, Luiz Aragão, Stephane Crispim, Raiane Neves, Lucas Domingues, Caio Correia, Alber Ipia, Manuel Gloor, John Miller, and Celso von Randow
Atmospheric CO2 concentrations have increased significantly in recent years, reaching unprecedented levels. In the Amazon region, the main CO2 emissions come from land use and cover change (LUCC), especially deforestation of natural forests. Understanding the impacts of climate change and deforestation on Amazon forests is essential to understanding their role in the current carbon balance at different scales. The lower-troposphere greenhouse gas (GHG) monitoring program, the CARBAM project, has been collecting bimonthly GHG vertical profiles at four sites in the Amazon since 2010, filling an important gap in regional GHG measurements. Here we compare different LUCC datasets for the Amazon region to assess whether annual LUCC is related to the bimonthly CO2 aircraft measurements. We compared the annual (2010-2018) LUCC area from the IBGE, PRODES and MapBiomas Pan-Amazon datasets for the mean influence area of each CARBAM site and related these LUCC areas to the annual CO2 fluxes. We found differences between the classification methods of the LUCC datasets, which lead to differences in the total deforested area. The LUCC data show different tendencies in each CARBAM influence area, with more deforestation around the eastern CARBAM sites. There is no clear relation between LUCC and carbon fluxes over the last 8 years. Inter-annual variability in CO2 fluxes could instead be related to the several droughts that influenced photosynthesis and respiration. We highlight the scale issues that arise when combining LUCC datasets, atmospheric CO2 measurements and CO2 modeling to better understand the current Amazon carbon balance.
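Relating an annual LUCC-area series to an annual CO2-flux series amounts to testing for an association between two short yearly series. A minimal sketch of such a test follows; all values are hypothetical, not the CARBAM or LUCC data, and a weak |r| would be consistent with the "no clear relation" finding:

```python
from statistics import mean, stdev

def pearson_r(x, y):
    """Sample Pearson correlation coefficient of two equal-length series."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

# Hypothetical 8-year series for one site influence area.
deforested_km2 = [1200, 1100, 900, 950, 1000, 1300, 1500, 1700]
co2_flux = [0.12, -0.05, 0.30, 0.02, 0.45, 0.10, -0.02, 0.20]  # arbitrary units
r = pearson_r(deforested_km2, co2_flux)
```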
Acknowledgment: FAPESP (2018/18493-7; 2018/14006-4; 2016/2016/02018-2), NASA, ERC (GEOCARBON, Horizon 2020/ASICA), NERC (NE/F005806/1), CNPq (480713/2013-8).
How to cite: Tejada, G., Gatti, L., Basso, L., Marani, L., Cassol, H., Arai, E., Aragão, L., Crispim, S., Neves, R., Domingues, L., Correia, C., Ipia, A., Gloor, M., Miller, J., and von Randow, C.: Is it feasible to relate CO2 atmospheric measurements with land use and cover change data? -A primary assessment of land use and cover change datasets in the Amazon, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-561, https://doi.org/10.5194/egusphere-egu2020-561, 2020.
EGU2020-1034 | Displays | ITS1.15/BG3.56
An R tool for Capturing Dynamics of Actual Evapotranspiration with MEP model and its application in Amazon
Yong Yang, Huaiwei Sun, and Jingfeng Wang
Abstract: Obtaining an accurate estimate of the surface energy budget is challenging but essential for investigating land-atmosphere interactions and global ecosystems. In this study, we developed a novel tool based on the maximum entropy production (MEP) method for simulating global energy fluxes and evapotranspiration (ET). The tool, named RMEP, was built in R for its convenience as an open-source, easy-to-use environment. Since the MEP model needs only three input variables (net radiation, surface temperature, and specific humidity), it offers great advantages for simulations at both site and global scales. First, we evaluated the heat fluxes simulated by RMEP at two Amazon basin flux sites, BR-Sa1 and BR-Sa3. Although the ground heat flux shows substantial bias, both the latent and sensible heat fluxes show high R2 at the hourly scale. We then tested RMEP at large scale using a global dataset. Since the Global Land Data Assimilation System (GLDAS) product integrates satellite data and ground-based observations at the global scale, GLDAS radiation, surface temperature, and specific humidity were used as RMEP inputs, and the RMEP outputs were validated against the GLDAS fluxes and evapotranspiration. The MEP model performs well in simulating the surface energy budget at the global scale and over the Amazon basin at the 3-hourly temporal scale. Its performance with GLDAS data is superior to that with eddy-covariance (EC) data, with higher R2, lower RMSE, and higher, positive NSE. In addition, the MEP model accurately estimated ET at regional and global scales. For the Amazon in particular, MEP-simulated heat fluxes and ET were compared at their original (3-hourly and daily) and aggregated monthly temporal scales.
Generally, the original 3-hourly simulations showed higher accuracy and smaller bias than the daily simulations. Taking the aggregated monthly ET as an example, monthly ET aggregated from 3-hourly output (R2=0.91, NSE=0.85) outperformed that from daily output (R2=0.29, NSE=-0.98). These results indicate the excellent performance of the MEP model in estimating ET at the 3-hourly temporal scale in the Amazon area. In summary, RMEP performs well at both site and global scales. It can handle input both as site-measured tables and as global NetCDF files. The resulting figures, global ET values (as NetCDF files), source code, and R package are available on request from the first author.
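The NSE values quoted above follow the standard Nash-Sutcliffe definition, under which a negative score (such as NSE = -0.98 for daily ET) means the model performs worse than simply predicting the observed mean. A minimal sketch of the metric (illustrative values, not the study's data):

```python
def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect model, 0 matches the
    observed mean, and negative values mean worse than the mean."""
    mean_obs = sum(obs) / len(obs)
    err = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - err / var

obs = [1.0, 2.0, 3.0, 4.0]
perfect = nse(obs, obs)          # 1.0
mean_only = nse(obs, [2.5] * 4)  # 0.0
```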
Appendix. List of figures and tables.
Table 1. Information for two flux sites
Figure 1.
How to cite: Yang, Y., Sun, H., and Wang, J.: An R tool for Capturing Dynamics of Actual Evapotranspiration with MEP model and its application in Amazon, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1034, https://doi.org/10.5194/egusphere-egu2020-1034, 2020.
EGU2020-1618 | Displays | ITS1.15/BG3.56
Isoprene emission in central Amazonia - from measurements to model estimates
Eliane Gomes-Alves, Tyeen Taylor, Pedro Assis, Giordane Martins, Rodrigo Souza, Sergio Duvoisin-Junior, Alex Guenther, Dasa Gu, Ana Maria Yáñez-Serrano, Jürgen Kesselmeier, Anywhere Tsokankunku, Matthias Sörgel, Bruce Nelson, Davieliton Pinho, Aline Lopes, Nathan Gonçalves, Trissevgeni Stavrakou, Maite Bauwens, Antonio Manzi, and Susan Trumbore
Isoprene regulates large-scale biogeochemical cycles by influencing atmospheric chemical and physical processes, and its dominant sources to the global atmosphere are tropical forests. Although global and regional model estimates of isoprene emission have been refined over the last decades, modeled emissions from tropical vegetation still carry high uncertainty due to a poor understanding of the biological and environmental controls on emissions. It is known that isoprene emission can vary significantly with plant traits, such as leaf phenology, and with the environment; however, current models still lack good representation of tropical plant species because very few observations are available. To create a predictive framework for the isoprene emission capacity of tropical forests, an improved mechanistic understanding of how the magnitude of emissions varies with plant traits and the environment in these ecosystems is necessary. In this light, we aimed to quantify the isoprene emission capacity of different tree species across leaf ages, combine these leaf measurements with long-term canopy measurements of isoprene and its biological and environmental drivers, and then use the results to better parameterize isoprene emissions estimated by MEGAN. At the Amazon Tall Tower Observatory (ATTO) site in central Amazonia we measured: (1) isoprene emission capacity at different leaf ages for 21 tree species; (2) isoprene canopy mixing ratios during six campaigns from 2013 to 2015; (3) isoprene tower flux during the dry season of 2015 (an El Niño year); (4) environmental factors, air temperature and photosynthetically active radiation (PAR), from 2013 to 2018; and (5) biological factors, tower-based measurements of leaf demography and phenology, from 2013 to 2018.
We then parameterized the leaf age algorithm of MEGAN with the measurements of isoprene emission capacity at different leaf ages and the tower-based measurements of leaf demography and phenology. Model estimates were then compared with canopy-level measurements and five years of satellite-derived isoprene emission (OMI) for the ATTO domain (2013-2017). Leaf-level isoprene emission capacity was lower for old leaves (> 6 months) and young leaves (< 2 months) than for mature leaves (2-6 months), and our model results suggest that this affects seasonal ecosystem isoprene emission capacity, since the demography of the different leaf age classes varies over the year. We will present further results on how changes in leaf demography and phenology and in temperature and PAR affect seasonal ecosystem isoprene emission, and how modeling can be improved by optimizing the leaf age algorithm. In addition, we will compare ecosystem isoprene emission in normal years (2013, 2014, 2017) and anomalous years (2015, El Niño; 2016, post El Niño), and discuss how a strong El Niño year can influence plant functional strategies that carry over into the following year and potentially affect isoprene emission.
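A MEGAN-style leaf age algorithm weights per-age-class relative emission capacities by the leaf demography. The sketch below uses hypothetical class capacities chosen only to reflect the reported pattern (young and old leaves emitting less than mature leaves); they are not the fitted MEGAN parameters or the study's measured values:

```python
# Relative emission capacity per leaf age class (hypothetical placeholders).
AGE_CAPACITY = {"young": 0.3, "mature": 1.0, "old": 0.6}

def gamma_age(fractions):
    """Demography-weighted emission activity factor.

    fractions: leaf-area fraction per age class; must sum to 1.
    """
    assert abs(sum(fractions.values()) - 1.0) < 1e-9
    return sum(AGE_CAPACITY[c] * f for c, f in fractions.items())

# Canopy dominated by mature leaves vs. a young flushing canopy:
g_mature = gamma_age({"young": 0.1, "mature": 0.8, "old": 0.1})  # ~0.89
g_flush = gamma_age({"young": 0.6, "mature": 0.3, "old": 0.1})   # ~0.54
```

Because the weights track leaf demography, the same leaf-level capacities yield a seasonally varying canopy emission capacity, which is the mechanism the abstract describes.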
How to cite: Gomes-Alves, E., Taylor, T., Assis, P., Martins, G., Souza, R., Duvoisin-Junior, S., Guenther, A., Gu, D., Yáñez-Serrano, A. M., Kesselmeier, J., Tsokankunku, A., Sörgel, M., Nelson, B., Pinho, D., Lopes, A., Gonçalves, N., Stavrakou, T., Bauwens, M., Manzi, A., and Trumbore, S.: Isoprene emission in central Amazonia - from measurements to model estimates, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1618, https://doi.org/10.5194/egusphere-egu2020-1618, 2020.
EGU2020-9967 | Displays | ITS1.15/BG3.56
Atmospheric impact of sesquiterpenes in the Amazon rainforest
Nora Zannoni, Stefan Wolff, Anywhere Tsokankunku, Matthias Soergel, Marta Sa, Alessandro Araujo, and Jonathan Williams
Sesquiterpenes (C15H24) are highly reactive biogenic volatile organic compounds playing an important role in atmospheric chemistry. Once emitted from the Earth’s surface, primarily by vegetation, they are rapidly oxidized to semivolatile oxygenated organic species that can lead to secondary organic aerosols (SOA) that influence climate. In the pristine Amazon rainforest environment oxidation of sesquiterpenes is initiated by OH and ozone.
We measured sesquiterpenes in March 2018 (wet season) and November 2018 (dry season) in central Amazonia, at the remote field site ATTO (Amazon Tall Tower Observatory), Brazil. Samples were collected on adsorbent-filled tubes equipped with ozone scrubbers at different heights above the forest canopy: every three hours for two weeks at 80 m and 150 m (wet season) and every hour for three days at 80 m, 150 m and 320 m (dry season). Samples were then analysed in the laboratory with a TD-GC-TOF-MS (thermodesorption gas chromatograph with time-of-flight mass spectrometer, Markes International). Simultaneous measurements of ozone and meteorological parameters were made at the nearby INSTANT tower. Chromatographic peaks were identified by injection of standard molecules and by matching literature mass spectra, and the compounds were quantified by injection of a standard mixture containing terpenes. The most abundant sesquiterpene measured at ATTO is (-)-α-copaene. Its diel profile varies with photosynthetically active radiation (PAR) and temperature, suggesting the canopy is the main emission source. Interestingly, other identified sesquiterpenes show a consistently mirrored cycle, with higher concentrations by night than by day; these varied mostly with relative humidity, suggesting the soil is their main source. Air samples taken at the ground differ qualitatively and quantitatively from those collected at the different heights on the tower. Sesquiterpenes show a common maximum at sunrise (5:00-7:00 local time, UTC-4) coincident with a strong decrease in ozone concentration (>50% decrease on average during the dry season). The effect is strongest during the dry season, when sesquiterpene and ozone concentrations are highest and ozone loss is largest. The atmospheric impact of the measured sesquiterpenes will be discussed, including ozone reactivity contributions and OH generation.
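The ozone-reactivity contribution of a single sesquiterpene can be estimated as a first-order O3 loss frequency, k_O3 multiplied by the compound's number concentration. The sketch below illustrates the unit conversion and calculation; the mixing ratio and rate constant are illustrative assumptions, not measured ATTO values:

```python
# Air number density at roughly 25 degC and 1 atm (molecules cm-3).
N_AIR = 2.46e19

def o3_reactivity(mixing_ratio_ppt, k_o3):
    """O3 loss frequency (s-1) due to one compound.

    k_o3: rate constant in cm3 molecule-1 s-1 (assumed value).
    """
    conc = mixing_ratio_ppt * 1e-12 * N_AIR  # ppt -> molecules cm-3
    return k_o3 * conc

# e.g. 10 ppt of a sesquiterpene with an assumed k_O3 of 1e-16:
r_o3 = o3_reactivity(10.0, 1e-16)  # ~2.5e-8 s-1
```

Summing such terms over all measured sesquiterpenes gives their total contribution to ozone loss, which can then be compared with the observed sunrise ozone decrease.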
How to cite: Zannoni, N., Wolff, S., Tsokankunku, A., Soergel, M., Sa, M., Araujo, A., and Williams, J.: Atmospheric impact of sesquiterpenes in the Amazon rainforest, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9967, https://doi.org/10.5194/egusphere-egu2020-9967, 2020.
EGU2020-8977 | Displays | ITS1.15/BG3.56
Chemical characterization of submicrometer organic aerosol particles from the Amazon rainforest with high-resolution mass spectrometry
Denis Leppla, Leslie Kremper, Nora Zannoni, Maria Praß, Florian Ditas, Bruna Holanda, Christopher Pöhlker, Jonathan Williams, Marta Sá, Stefan Wolff, Maria Christina Solci, and Thorsten Hoffmann
The Amazon rainforest is one of the most important pristine ecosystems for atmospheric chemistry and biodiversity. The region allows the study of organic aerosol particles and their role in cloud formation. However, the rainforest is subject to constant change due to human influences. It is therefore essential to acquire data on trace gases and aerosols over the coming decades for a better understanding of the atmospheric oxidant cycle. To this end, the Amazon Tall Tower Observatory (ATTO) research site was established in the central Amazon basin to perform long-term measurements under near-natural conditions.
Biogenic emissions of volatile organic compounds (VOCs) consist mainly of isoprene and terpenes, which are responsible for the production of a large fraction of atmospheric particulate matter. Isoprene represents the largest source of non-methane VOCs in the atmosphere and is primarily emitted from vegetation; its global emissions are estimated at about 500 ‒ 600 Tg per year. Originally, isoprene photooxidation was not expected to contribute to the secondary organic aerosol (SOA) budget, owing to the high volatility of the resulting oxidation products. However, several studies have provided evidence for the importance of isoprene SOA formation. Owing to its two double bonds, isoprene is highly reactive towards atmospheric oxidants such as OH and NO3 radicals. The subsequent reactive uptake on acidic particles is strongly dependent on the NO concentration; anthropogenic sources therefore have a substantial impact on isoprene photooxidation.
The chemical composition of atmospheric aerosols in the rainforest depends strongly on the season, since the Amazon basin exhibits large variations in gaseous and particulate matter, with clean air during the wet season and polluted conditions during the dry season due to biomass burning events. For a comprehensive assessment, field measurements under both conditions are needed to study the contributions of isoprene and terpene SOA. For that reason, filter samples were collected at ATTO at different heights to analyze the aerosol composition from both local and regional sources.
High-resolution mass spectrometry combined with data mining techniques will help link characteristic SOA compounds to particular climate conditions, giving insights into the Amazon aerosol life cycle.
How to cite: Leppla, D., Kremper, L., Zannoni, N., Praß, M., Ditas, F., Holanda, B., Pöhlker, C., Williams, J., Sá, M., Wolff, S., Solci, M. C., and Hoffmann, T.: Chemical characterization of submicrometer organic aerosol particles from the Amazon rainforest with high-resolution mass spectrometry, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8977, https://doi.org/10.5194/egusphere-egu2020-8977, 2020.
EGU2020-12009 | Displays | ITS1.15/BG3.56
The long-range transport of African mineral dust to the Amazon basinXurong Wang, Nan Ma, Maria Prass, Christopher Pöhlker, and Qiaoqiao Wang
Being the largest mineral dust source, Africa contributes over half of the global mineral dust emission. Large amounts of this dust are transported across the Atlantic and frequently enter the Amazon basin, not only perturbing the near-pristine conditions in the Amazon during the wet season but also fertilizing the rainforest through dust deposition and the associated nutrient input. In this study, we use a global chemical transport model (GEOS-Chem) to simulate the emission, long-range trans-Atlantic transport, and deposition flux of African mineral dust to the Amazon basin during the period 2013–2017, with observational constraints from AERONET data, MODIS data, and observations from the Amazon Tall Tower Observatory (ATTO). With an optimized size distribution of African dust, we improve the simulation of dust over both the source region (North Africa) and the remote region (Amazon basin). The trans-Atlantic transport of African dust reaching the Amazon basin generally occurs in (Northern Hemisphere) winter and spring, associated with northeasterly trade-wind advection. In winter the dust layer is transported below 2 km height, while in the other seasons it travels between 1 km and 3 km. With an average annual emission of 0.78 (±0.14) Pg a-1, African dust entering the Amazon basin can reach 3.93 (±0.76) µg m-3 at ATTO, accounting for 19% (±2.5%) of total particle concentrations; during strong dust events the contribution can be up to 91%. Assuming mass fractions of 4.4% and 0.082% for iron and phosphorus in the mineral dust, we estimate annual deposition fluxes of 35.3 (±4.49) mg m-2 a-1 of iron and 0.66 (±0.084) mg m-2 a-1 of phosphorus to the Amazon rainforest.
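As a quick consistency check, the phosphorus flux follows directly from the reported iron flux and the assumed elemental mass fractions. The sketch below redoes this arithmetic in Python; the implied total dust deposition flux is an inference from those numbers, not a value reported in the abstract.

```python
# Back-of-envelope check of the nutrient deposition arithmetic
# (mass fractions and the iron flux are taken from the abstract;
# the total dust flux is inferred, not reported).
FE_FRACTION = 0.044    # assumed iron mass fraction in mineral dust
P_FRACTION = 0.00082   # assumed phosphorus mass fraction
fe_flux = 35.3         # reported iron deposition, mg m-2 a-1

dust_flux = fe_flux / FE_FRACTION   # implied total dust deposition
p_flux = dust_flux * P_FRACTION     # implied phosphorus deposition

print(f"implied dust deposition: {dust_flux:.0f} mg m-2 a-1")
print(f"implied P deposition:    {p_flux:.2f} mg m-2 a-1")
```

The implied phosphorus flux of about 0.66 mg m-2 a-1 matches the value quoted in the abstract, confirming that the two nutrient estimates are internally consistent.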
How to cite: Wang, X., Ma, N., Prass, M., Pöhlker, C., and Wang, Q.: The long-range transport of African mineral dust to the Amazon basin, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12009, https://doi.org/10.5194/egusphere-egu2020-12009, 2020.
EGU2020-12746 | Displays | ITS1.15/BG3.56
Amazon forest responses to drought: scaling from individuals to ecosystems.Scott R. Saleska, Natalia Restrepo-Coupe, Fernanda V. Barros, Paulo R. L. Bittencourt, Neill Prohaska, Deliane V. Penha, Loren P. Albert, Mauro Brum, Luciano Pereira, Leila S. M. Leal, Alessandro C. Araujo, Scott C. Stark, Luciana Alves, Edgard Tribuzy, Plinio B. Camargo, Raimundo Cosme de Oliveira, Valeriy Ivanov, Jose Mauro, Luiz Aragao, and Rafael S. Oliveira
Scaling from individuals or species to ecosystems is a fundamental challenge of modern ecology, and understanding tropical forest responses to drought is a key challenge in predicting responses to global climate change. Here we synthesize our developing understanding of these twin challenges by examining individual and ecosystem responses to the 2015 El Niño drought at two sites in the central Amazon of Brazil, near Manaus and Santarem, which span a precipitation gradient from moderate (Manaus) to long (Santarem) dry seasons. We focus on how ecosystem water and carbon cycling, measured by eddy flux towers, emerges from individual trait-based responses, including the photosynthetic responses of individual leaves and water-cycle responses in terms of stomatal conductance and resistance to hydraulic xylem embolism. We found that the Santarem forest (with long dry seasons) responded strongly to drought: sensible heat flux significantly increased and evapotranspiration decreased. Consistent with this, we also observed reductions in photosynthetic activity and ecosystem respiration, indicating levels of stress not seen in the nearly two decades since measurements started at this site. Forests at the Manaus site showed significant, but less consistent, reductions in water and carbon exchange, together with a more pronounced water deficit. We report an apparent community-level selection for assemblages of traits and taxa indicative of higher drought tolerance at Santarem, compared to the Manaus forest (short dry seasons) and other forest sites across Amazonia. These results suggest that community trait compositions (as selected by past climate conditions) and environmental threshold values (e.g. cumulative rainfall, atmospheric moisture, and radiation) may help forecast ecosystem responses to future climate change.
How to cite: Saleska, S. R., Restrepo-Coupe, N., Barros, F. V., Bittencourt, P. R. L., Prohaska, N., Penha, D. V., Albert, L. P., Brum, M., Pereira, L., Leal, L. S. M., Araujo, A. C., Stark, S. C., Alves, L., Tribuzy, E., Camargo, P. B., Cosme de Oliveira, R., Ivanov, V., Mauro, J., Aragao, L., and Oliveira, R. S.: Amazon forest responses to drought: scaling from individuals to ecosystems., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12746, https://doi.org/10.5194/egusphere-egu2020-12746, 2020.
EGU2020-12511 | Displays | ITS1.15/BG3.56
Drought responses of Amazon forests under climate change: Separating the roles of soil moisture and canopy responsesHao-wei Wey, Kim Naudts, Julia Pongratz, Julia Nabel, and Lena Boysen
The Amazon forests are one of the largest ecosystem carbon pools on Earth. While more frequent and prolonged droughts have been predicted there under future climate change, the vulnerability of Amazon forests to drought has remained largely uncertain, as previous studies have shown that few land surface models succeed in capturing vegetation responses to drought. In this study, we present an improved version of the land surface model JSBACH, which incorporates new formulations of leaf phenology and litter production based on intensive field measurements from artificial drought experiments in the Amazon. Coupling the new JSBACH with the atmospheric model ECHAM, we investigate the drought responses of the Amazon forests and the resulting feedbacks under the RCP8.5 scenario. To gain more insight, the climatic effects are separated into (1) direct effects, including declining soil moisture and stomatal responses, and (2) soil moisture-induced canopy responses, the latter having been poorly simulated in the past. Preliminary results show that for net primary production and soil respiration, the direct effects and the canopy responses have similar spatial patterns, with the magnitude of the latter being 1/5 to 1/3 of the former. In addition, declining soil moisture enhances rainfall in the northern Amazon and suppresses rainfall in the south, while canopy responses have negligible effects on rainfall. Based on our findings, we suggest cautious interpretation of results from previous studies; to address this uncertainty, better strategies for modeling leaf phenology, such as the one implemented in this study, should be adopted.
How to cite: Wey, H., Naudts, K., Pongratz, J., Nabel, J., and Boysen, L.: Drought responses of Amazon forests under climate change: Separating the roles of soil moisture and canopy responses, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12511, https://doi.org/10.5194/egusphere-egu2020-12511, 2020.
EGU2020-19455 | Displays | ITS1.15/BG3.56
Quantifying deposition pathways of Ozone at a rainforest site (ATTO) in the central Amazon basinMatthias Sörgel, Anywhere Tsokankunku, Stefan Wolff, Alessandro Araùjo, Pedro Assis, Hartwig Harder, Giordane Martins, Marta Sá, Rodrigo Souza, Jonathan Williams, and Nora Zannoni
Direct eddy covariance flux measurements of O3 in tropical forests are sparse, and modelled O3 deposition velocities for tropical forest carry large uncertainties. We therefore measured O3 fluxes at four heights (4 m, 12 m, 46 m, and 81 m), i.e., two levels within the canopy (below the crown layer) and two levels above it. At the same levels, heat and CO2 fluxes were measured by eddy covariance to differentiate upper-canopy fluxes from understory and soil fluxes and to infer stomatal conductance from the inverted Penman-Monteith equation. Further measurements include profiles of O3, NOx, CO2, and H2O, which are used to calculate storage fluxes and reactions of O3 with NOx within the air volume. Additionally, leaf surface temperature and leaf wetness were measured in the upper canopy (26 m) to infer their influence on non-stomatal deposition. The measurements took place at the ATTO (Amazon Tall Tower Observatory) site, located about 150 km northeast of the city of Manaus in the Amazon rainforest (02°08’38.8’’S, 58°59’59.5’’W). The climate in this region is characterized by a rainy season (ca. 350 mm precipitation around March) and a dry season (ca. 80 mm in September). During the wet months the air quality is close to pristine, while strong pollution from biomass burning is evident in the dry season. We will therefore present results from two intensive campaigns (3–4 flux levels) covering the rainy season (March to May) and the dry season (September to December) of 2018.
The analysis focuses on the partitioning between (a) the crown layer and the understory and (b) stomatal and non-stomatal deposition, with a further breakdown of the non-stomatal pathways. Non-stomatal deposition is analyzed by quantifying gas-phase reactions of O3 with NOx and by estimating O3 reactivity towards VOCs. Furthermore, the remaining (surface) deposition is analyzed in relation to leaf surface temperature and leaf wetness.
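The stomatal-conductance retrieval mentioned above amounts to a straightforward algebraic inversion of the Penman-Monteith equation. The function below is a minimal sketch of that inversion for a big-leaf canopy; the symbols follow the standard formulation, and all example values are illustrative assumptions, not the campaign's actual data or processing code.

```python
# Minimal sketch: inverting the big-leaf Penman-Monteith equation
#   LE = (delta*A + rho*cp*vpd*ga) / (delta + gamma*(1 + ga/gs))
# for the bulk stomatal (surface) conductance gs.
# All numerical inputs below are illustrative assumptions.

def stomatal_conductance(LE, A, vpd, ga, delta,
                         gamma=0.066, rho=1.15, cp=1005.0):
    """Return surface conductance gs (m s-1).

    LE    : latent heat flux (W m-2)
    A     : available energy, Rn - G (W m-2)
    vpd   : vapour pressure deficit (kPa)
    ga    : aerodynamic conductance (m s-1)
    delta : slope of saturation vapour pressure curve (kPa K-1)
    gamma : psychrometric constant (kPa K-1)
    rho   : air density (kg m-3); cp: specific heat (J kg-1 K-1)
    """
    return gamma * ga / ((delta * A + rho * cp * vpd * ga) / LE
                         - delta - gamma)

# Plausible midday values for a moist tropical canopy (assumed):
gs = stomatal_conductance(LE=350.0, A=500.0, vpd=1.2, ga=0.08, delta=0.19)
print(f"gs ≈ {gs:.4f} m s-1")
```

With these assumed inputs the inversion yields a canopy-scale conductance on the order of 0.016 m s-1, within the range typically reported for well-watered broadleaf canopies.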
How to cite: Sörgel, M., Tsokankunku, A., Wolff, S., Araùjo, A., Assis, P., Harder, H., Martins, G., Sá, M., Souza, R., Williams, J., and Zannoni, N.: Quantifying deposition pathways of Ozone at a rainforest site (ATTO) in the central Amazon basin, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19455, https://doi.org/10.5194/egusphere-egu2020-19455, 2020.
EGU2020-7341 | Displays | ITS1.15/BG3.56
Orographic gravity wave and low-level jet interaction above a tall and dense Amazonian forestLuca Mortarini, Polari Batista Corrêa, Daniela Cava, Cléo Quaresma Dias-Júnior, Antônio Ocimar Manzi Manzi, Otavio Acevedo, Alessandro Araújo, Matthias Sörgel, and Luiz Augusto Toledo Machado
Wavelet and multiresolution analyses are applied to ten nocturnal hours of observations of 3-D wind velocity taken within and above a forest canopy in central Amazonia. Data from the ATTO Project, consisting of seven levels of turbulence observations along both the 81 m and the 325 m towers, are used. The night presented here is dynamically rich, exhibiting three distinct periods. In the first, the boundary layer is characterized by canopy waves and coherent structures generated at the canopy top. In the second period, an intense orographic gravity wave generated at around 150 m strongly influences the boundary layer structure, both above and below the canopy. In the third period, a very stable stratification at the canopy top enables the development of a low-level jet that interferes with and disrupts the vertical orographic wave. Throughout the night, the wavelet cospectra identified turbulent and non-turbulent structures with different length and time scales that are generated at different levels above the canopy and propagate into it. The contributions of the different temporal scales of the flow above and within the canopy were identified using wavelet and multiresolution two-point cospectra. The analysis showed how turbulent and wave-like structures propagate in different ways and, further, the ability of low-frequency processes to penetrate the canopy and influence the transport of energy and scalars in the roughness sublayer and within the canopy.
Keywords: Coherent structures, Canopy Waves, Gravity Waves, Stable Boundary Layer, Low-Level Jet, wave-turbulence interaction.
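A simplified, single-point version of the multiresolution cospectral decomposition used above can be sketched with a Haar-basis scheme on a dyadic record: segment means are computed and removed at successively finer scales, and the products of the means give each scale's contribution to the flux. This is an illustrative implementation under those assumptions, not the study's analysis code.

```python
import numpy as np

def mrd(a, b):
    """Multiresolution (Haar) cospectral decomposition of two series.

    Both series must have dyadic length 2**M. Returns the contribution
    of each averaging scale, from the whole record (index 0) down to
    single samples; the sum over indices 1..M equals the covariance.
    """
    a = np.asarray(a, float).copy()
    b = np.asarray(b, float).copy()
    n = len(a)
    M = int(np.log2(n))
    assert 2**M == n == len(b), "series must share a dyadic length"
    contributions = []
    for m in range(M, -1, -1):
        seg = 2**m                      # segment length at this scale
        ma = a.reshape(-1, seg).mean(axis=1)
        mb = b.reshape(-1, seg).mean(axis=1)
        contributions.append((ma * mb).mean())
        a -= np.repeat(ma, seg)         # remove means before finer pass
        b -= np.repeat(mb, seg)
    return contributions

# Synthetic demo: the scale contributions sum to the covariance.
rng = np.random.default_rng(1)
w = rng.standard_normal(1024)             # stand-in for vertical velocity
c = 0.5 * w + rng.standard_normal(1024)   # correlated scalar
D = mrd(w, c)
cov = ((w - w.mean()) * (c - c.mean())).mean()
print(f"sum of scale contributions: {sum(D[1:]):.4f}, covariance: {cov:.4f}")
```

Because each pass removes the segment means before the next, finer pass, the decomposition is orthogonal and partitions the total covariance exactly across time scales, which is what allows flux contributions from waves and turbulence to be separated by scale.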
How to cite: Mortarini, L., Corrêa, P. B., Cava, D., Dias-Júnior, C. Q., Manzi, A. O. M., Acevedo, O., Araújo, A., Sörgel, M., and Machado, L. A. T.: Orographic gravity wave and low-level jet interaction above a tall and dense Amazonian forest, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7341, https://doi.org/10.5194/egusphere-egu2020-7341, 2020.
ITS2.2/GM12.5 – Co-production and evolution in human-landscape interaction: from geoarchaeological records to geomorphological dynamics and human influence
EGU2020-6485 | Displays | ITS2.2/GM12.5 | Highlight
Geoarchaeological evidence of multiple climatic and anthropic triggers driving the breakdown of the Terramare civilization (Bronze Age, Northern Italy)Andrea Zerboni, Anna Maria Mercuri, Assunta Florenzano, Eleonora Clò, Giovanni Zanchetta, Eleonora Regattieri, Ilaria Isola, Filippo Brandolini, and Mauro Cremaschi
The Terramare civilization comprised hundreds of banked and moated villages located in the alluvial plain of the Po River in northern Italy, and developed between the Middle and the Recent Bronze Age (XVI–XII cent. BC). This civilization lasted for over 500 years, collapsing at around 1150 BC, in a period marked by great societal disruption in the Mediterranean area. The timing and modalities of the collapse of the Terramare Bronze Age culture are widely debated, and a combined geoarchaeological and palaeoclimatic investigation – the SUCCESSO-TERRA Project – is shedding new light on this enigma. The Terramare economy was based upon cereal farming, herding, and metallurgy; the settlements were also sustained by a well-developed system for the management of water and abundant wood resources, and they established a wide network of commercial exchange between continental Europe and the Mediterranean region. The SUCCESSO-TERRA Project investigated two main Bronze Age sites in northern Italy: (i) the Terramara Santa Rosa di Poviglio, and (ii) the San Michele di Valestra site, a coeval settlement outside the Terramare territory but in the adjoining Apennine range. Human occupation at San Michele di Valestra persisted after the Terramare crisis, and the site was settled continuously throughout the whole Bronze Age and up to the Iron Age. The combined geoarchaeological, palaeoclimatic, and archaeobotanical investigation of the archaeological sites and of independent archives of climatic proxies (off-site cores and speleothems) highlights both climatic and anthropic critical factors that triggered a dramatic shift in the land use of the Terramare civilization. The overexploitation of natural resources became excessive in the late period of the Terramare trajectory, when a climatic change also occurred.
A new speleothem record for the same region suggests a short-lived period of climatic instability followed by a marked peak of aridity. The unfavourable concomitance of human overgrazing and climatically triggered environmental pressure amplified the ongoing societal crisis, likely leading to the breakdown of the Terramare civilization within the span of a generation.
How to cite: Zerboni, A., Mercuri, A. M., Florenzano, A., Clò, E., Zanchetta, G., Regattieri, E., Isola, I., Brandolini, F., and Cremaschi, M.: Geoarchaeological evidence of multiple climatic and anthropic triggers driving the breakdown of the Terramare civilization (Bronze Age, Northern Italy), EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6485, https://doi.org/10.5194/egusphere-egu2020-6485, 2020.
EGU2020-161 | Displays | ITS2.2/GM12.5 | Highlight
Investigate human responses to Late-Holocene changes of fluvial landforms through Spatial Point Pattern Analysis (Po Plain, N Italy)Filippo Brandolini and Francesco Carrer
In fluvial environments, alluvial geomorphological features had a huge influence on settlement strategies during the Holocene. However, few projects have investigated this topic through quantitative, question-driven analyses of the human-landscape relationship. The Po Valley (N Italy) – located between the Mediterranean regions and continental Europe – is a key area for investigating environmental and cultural influences on settlement strategies since prehistoric times. For instance, the transition from Roman to Medieval times represented a crucial moment for the reorganisation of human settlement strategies in the Po Valley, a process driven mainly by climate change and socio-political factors. Spatial Point Pattern Analysis (SPPA) was employed here to provide a solid statistical assessment of these dynamics in the two historical phases. A point pattern (PP) corresponds to the locations of spatial events generated by a stochastic process within a bounded region. The density of the PP is proportional to the intensity of the underlying process. The intensity, in turn, can be constant within the region or spatially variable, thus influencing the uniformity of the distribution of spatial events. SPPA provides powerful techniques for the statistical analysis of PP data consisting of a complete set of locations of archaeological sites/finds within an observation window. The use of spatial covariates enables the investigation of environmental and non-environmental factors influencing the spatial homogeneity of the point process. Archaeologists have increasingly analyzed such datasets to quantify the characteristics of observed spatial patterns, with the aim of deriving hypotheses on the underlying processes or testing hypotheses derived from archaeological theory.
The aim of this paper is to assess whether a shift in water management strategies between the Roman and Medieval periods influenced the spatial distribution of settlements, and to evaluate the relative importance of agricultural suitability versus flood risk in each historical phase. In particular, the variability of settlement patterns between the Roman and Medieval phases has been assessed against two related proxies for alluvial geomorphology and agricultural suitability: flood hazard and soil texture. The SPPA shows that the Roman and Medieval settlement patterns mirror two different human responses to the geomorphological dynamics of the area. Roman land and water management were able to minimize the flood hazard, drain the floodplain, and organize a complex land use on different soil types. In the Medieval period, the alluvial geomorphology of the area, characterised by wide swampy meadows and frequent flood events, shaped the spatial organisation of settlement, which privileged topographically prominent positions. Social and cultural dynamics played a crucial role in responding to alluvial geomorphological challenges at different times.
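The covariate-dependent intensity described above can be illustrated with a toy simulation: an inhomogeneous Poisson point pattern is generated on the unit square by thinning, with intensity proportional to a hypothetical spatial covariate (here simply the x coordinate, standing in for something like elevation or distance from a palaeochannel), and the intensity is then estimated by binning on the covariate. All names and values are illustrative assumptions, not the study's data or code.

```python
import numpy as np

# Toy inhomogeneous Poisson point pattern on the unit square,
# with intensity lambda(x, y) = lam_max * x (covariate = x coordinate,
# a hypothetical stand-in for, e.g., elevation). Illustrative only.
rng = np.random.default_rng(42)

lam_max = 400.0                       # maximum intensity (points per unit area)
n_cand = rng.poisson(lam_max)         # candidate points for thinning
xy = rng.random((n_cand, 2))

covariate = xy[:, 0]                  # covariate value at each candidate
keep = rng.random(n_cand) < covariate # thinning keeps with prob. lambda/lam_max
pts = xy[keep]

# Estimate intensity within covariate bins (equal-area vertical strips)
edges = np.linspace(0.0, 1.0, 5)
counts, _ = np.histogram(pts[:, 0], bins=edges)
strip_area = np.diff(edges) * 1.0     # strips span the unit-height window
intensity = counts / strip_area
print("estimated intensity per covariate bin:", intensity.round(1))
```

The estimated intensity rises across the covariate bins, recovering the dependence built into the simulation; fitting such a dependence formally (e.g. a log-linear intensity model) is the kind of inference SPPA with spatial covariates provides for the archaeological record.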
How to cite: Brandolini, F. and Carrer, F.: Investigate human responses to Late-Holocene changes of fluvial landforms through Spatial Point Pattern Analysis (Po Plain, N Italy), EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-161, https://doi.org/10.5194/egusphere-egu2020-161, 2020.
EGU2020-13700 | Displays | ITS2.2/GM12.5
Understanding settlement-landscape interaction with literary records and geoinformatics: The case of Homer’s Late Bronze Age Southeast Aegean
Athanasios Votsis and Dina Babushkina
Advances in Digital Humanities are providing increasingly rich research material for understanding (1) the environmental and locational attributes of ancient settlements and (2) the regional structure of systems of settlements. Non-material records, in particular, provide information about the social and cultural drivers of human-landscape interaction in settlements that, when combined with material records, helps refine existing models of settlement-landscape evolution and sustainability. We present a case study from the Late Bronze Age Southeast Aegean that utilizes literary records, biogeophysical data and geoinformatics methods to offer insights into the above-mentioned topics in that region. Specifically, we utilize a georeferenced version of the record of cities and their sociocultural and environmental descriptions provided in the Catalog of Ships in Homer’s Iliad. We combine this information with datasets from the spatial (physiography, climatology) and temporal (continuities/discontinuities, population) context of those settlements. Ultimately, we are interested in deriving identifiable patterns from our dataset, more specifically whether there exist patterns of settlement-environment interaction that are inherently more sustainable than others, as well as getting a glimpse into the hierarchy of values underlying this interaction.
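One simple form the pattern-derivation step described above can take is grouping settlements by their environmental attributes. The sketch below uses a minimal k-means clustering on an invented feature table (the attribute names and values are placeholders, not data from the georeferenced Catalog of Ships):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature table, for illustration only: each row is a
# settlement described by (elevation in m, distance to coast in km).
# Real inputs would come from the georeferenced catalog joined with
# physiographic and climatological layers.
features = np.array([
    [12.0, 0.5], [25.0, 1.2], [8.0, 0.3],        # low-lying coastal sites
    [310.0, 18.0], [280.0, 22.0], [350.0, 15.0]  # inland upland sites
])

# Standardize so both variables contribute comparably to distances
z = (features - features.mean(0)) / features.std(0)

def kmeans(x, k=2, iters=50):
    """Minimal k-means: alternate point assignment and centroid update."""
    centroids = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(x[:, None] - centroids[None], axis=2)
        labels = d.argmin(1)
        centroids = np.array([x[labels == j].mean(0) for j in range(k)])
    return labels

print(kmeans(z))  # groups the coastal-lowland and inland-upland sites
```

Cluster memberships could then be compared against the settlements' textual descriptions or later continuity/discontinuity to look for systematically more sustainable configurations.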
How to cite: Votsis, A. and Babushkina, D.: Understanding settlement-landscape interaction with literary records and geoinformatics: The case of Homer’s Late Bronze Age Southeast Aegean, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13700, https://doi.org/10.5194/egusphere-egu2020-13700, 2020.
EGU2020-1440 | Displays | ITS2.2/GM12.5
Human-landscape Interactions along the Danube from the Neolithic to the Present in Budapest, Hungary
István Viczián, Gábor Szilas, Farkas Márton Tóth, György Sípos, Dávid Gergely Páll, and József Szeberényi
Geoarchaeological and geomorphological studies were carried out on the alluvial plain of the Danube in an urban environment in the northwestern part of Budapest. Human-landscape interactions were investigated from the Neolithic to the present.
The environmental reconstruction was produced through inter- and multidisciplinary geomorphological, archaeological and environmental-historical research, using OSL and radiocarbon dating, malacology, stratigraphy and sedimentological analyses of samples from archaeological excavations, GIS processing of contemporary and historical maps and archival documents, and the spatial pattern of prehistoric archaeological sites.
The Danube is Europe's second-longest river, with a large catchment area. Climatic and environmental changes in its drainage basin have significant effects on the environment of our case study area and on its societies. Long-term and short-term processes of geomorphological and hydrographical evolution, as well as episodic landscape events, were studied by investigating the geomorphological responses to climatic, fluvial and human impacts on the environment.
The landscape evolution from a nature-dominated fluvial environment to the densely built-up anthropogenic landscape of a metropolis was revealed. An active river channel crossed the research area in the Early Holocene. Today only some moderate-sized swampy, waterlogged areas attest to the existence of this former river channel and the subsequent lake and marshy environment. Over time this relict form of the Danube's palaeochannel was occupied by streams draining surface water, groundwater and abundant karstic springs. The location of the two prehistoric settlement concentrations along the Danube can be linked to the former confluence of significant tributary streams. Geomorphological-topographical investigations of the area's archaeological sites revealed that one of the streams reversed its flow direction over time. From the Roman period onward, and especially in modern times, the watercourses have been canalised and their channels relocated. Today hardly anything recalls the former alluvial environment in this part of the capital.
How to cite: Viczián, I., Szilas, G., Tóth, F. M., Sípos, G., Páll, D. G., and Szeberényi, J.: Human-landscape Interactions along the Danube from the Neolithic to the Present in Budapest, Hungary, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1440, https://doi.org/10.5194/egusphere-egu2020-1440, 2020.
EGU2020-7066 | Displays | ITS2.2/GM12.5
Human-waterscape interactions during the early-mid Holocene: insights from a multi-disciplinary approach in Southern Mesopotamia (Iraq)
Giulia Iacobucci, Davide Nadali, and Francesco Troiani
The question of human-waterscape interactions has been, and still is, a central topic in historical and archaeological research worldwide. The Southern Mesopotamian Plain, where the ancient state of Lagash developed, represents an ideal case study. At the archaeological site of Tell Zurghul, extensive fieldwork has recently been carried out by the Italian Archaeological Mission, and an interdisciplinary approach combining field surveys with geomorphological mapping through remote sensing techniques has been applied to analyse the function and role of the waterscape in this early civilization. Geomorphological analysis through remote sensing and archaeological surveys is essential for reconstructing a complex environmental system in which landforms produced by different morphogenetic processes occur, related to the presence of a wide fluvial-deltaic palaeo-system and to human activities.
The Southern Mesopotamian Plain coincides with the Tigris and Euphrates deltaic plain, which developed from the Mid-Holocene onwards: the maximum marine ingression reached Nasiriyah and Al-Amara about 6000 yrs BP; after that, progradation of the palaeo-delta shifted the shoreline to its modern position. The development of a typical bird's-foot delta guaranteed the water supply indispensable for agriculture, settlement and transport. The high mobility of the channels and avulsion processes (i.e. levee breaks and the related formation of crevasse splays) are the main features typically connected to a multi-channel system, guaranteeing the water supply through seasonal floods. In the area, water management during the mid-Holocene, through the digging of an extensive network of canals and the building of dams, improved socio-economic conditions. However, the so-called megadrought event, dated to 4.2 ka BP, drastically modified the hydroclimatic conditions of the area, favouring arid conditions and increasing the frequency of unpredictable extreme hydrological events.
The main aim of this work is to contribute to the reconstruction of the waterscape surrounding the archaeological sites of Tell Zurghul and Lagash, and to learn more about waterscape-human interactions during the Holocene. A multi-sensor approach has been adopted to identify the main geomorphological features and describe the associated morphogenetic processes. The availability of multispectral Landsat-8 satellite imagery and 30-m-resolution DEMs (the optical DSM from ALOS and the infrared DTM from ASTER) allowed a supervised classification based on specific spectral signatures, as well as a microtopographic analysis. The spectral signatures of active and inactive crevasse splays have been extracted, discerning among crevasse channels and proximal and distal deposits, characterized by the coarsest and finest sediment respectively. Moreover, the microtopographic analysis made it possible to recognize channels elevated above the inter-floodplains, the upward convexity of active crevasse splays, and the roughly flat topography of inactive ones. Excavations in Area B of the archaeological site show evidence of the presence of water and the proximity of the sea. A brackish-marine marsh environment is confirmed by fish vertebrae (belonging to the bull shark, Carcharhinus leucas) and a fishing net recovered from a mudbrick structure. Moreover, the patron deity of the city in the 3rd millennium BC was the goddess of the sea and of sea species (fish and birds), confirming the strong connection between water and the ancient settlement.
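A supervised classification of the kind described above can be sketched as a minimum-distance-to-mean classifier. The band values and class signatures below are invented placeholders, not the signatures extracted in the study:

```python
import numpy as np

# Hypothetical training signatures, for illustration only: mean reflectance
# of labelled pixels in three Landsat-8 bands (e.g. green, red, NIR).
# Real signatures would be extracted from training areas on the imagery.
signatures = {
    "active_splay":   np.array([0.10, 0.12, 0.30]),  # moist, vegetated
    "inactive_splay": np.array([0.18, 0.22, 0.25]),  # drier, finer sediment
    "channel":        np.array([0.06, 0.05, 0.04]),  # open water, dark in NIR
}

classes = list(signatures)
centroids = np.stack([signatures[c] for c in classes])  # (3 classes, 3 bands)

def classify(pixels):
    """Minimum-distance-to-mean classifier: assign each pixel to the
    class whose spectral signature is nearest in feature space."""
    d = np.linalg.norm(pixels[:, None, :] - centroids[None], axis=2)
    return [classes[i] for i in d.argmin(axis=1)]

pixels = np.array([[0.07, 0.06, 0.05],   # dark in all bands
                   [0.17, 0.21, 0.26]])  # brighter, moderate NIR
print(classify(pixels))  # → ['channel', 'inactive_splay']
```

Operational classifiers (maximum likelihood, random forest) refine this idea, but the geometry of matching pixels to class signatures is the same.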
How to cite: Iacobucci, G., Nadali, D., and Troiani, F.: Human-waterscape interactions during the early-mid Holocene: insights from a multi-disciplinary approach in Southern Mesopotamia (Iraq), EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7066, https://doi.org/10.5194/egusphere-egu2020-7066, 2020.
EGU2020-3763 * | Displays | ITS2.2/GM12.5 | Highlight
Apocalypse then? The Laacher See eruption (13ka BP) and its human impact along a proximal-to-distal transect
Felix Riede
Approximately 13,000 years BP, the Laacher See volcano, located in present-day western Germany (East Eifel volcanic field, Rhenish Shield), erupted cataclysmically. Airfall tephra covered Europe from the Alps to the Baltic. As part of an ongoing project investigating the potential ecological and human impacts of this eruption, legacy data harvested from a variety of disciplinary sources (palynology, pedology, archaeology, geological grey literature) are now combined with recent geoarchaeological work to provide new insights into the distribution of the Laacher See fallout and its impact on contemporaneous hunter-gatherer populations. This detailed reconstruction of human impact 13,000 years ago also forms the basis for reflection on modern strategies for coping with the emerging risks posed by extreme and compound events in the present and near future.
How to cite: Riede, F.: Apocalypse then? The Laacher See eruption (13ka BP) and its human impact along a proximal-to-distal transect , EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3763, https://doi.org/10.5194/egusphere-egu2020-3763, 2020.
EGU2020-1868 | Displays | ITS2.2/GM12.5
Geoarchaeological study of big Essentuksky 1 kurgan in Ciscaucasia, Russia
Olga Khokhlova and Alena Sverchkova
The chrono-sequence of paleosols buried under different constructions within the big kurgan Essentuksky 1 in Ciscaucasia (Stavropol region), built by people of the Maikop culture in the second quarter of the 4th millennium BC, was studied. The kurgan was 5.5-6 m high and more than 60 m in diameter. It comprised four earthen constructions and three made of stones. We studied the composition of the material of the kurgan's constructions, the paleosols buried under the four earthen constructions, and the surface soil in the area adjoining the kurgan. Macro- and micromorphological observations and a set of analytical and instrumental methods were used to study the properties of the soils in the chrono-sequence and the composition of material from the earthen constructions. According to archaeological data, the kurgan was built over a time-span of at least 25 but not more than 50 years. During this interval, the morphological and physicochemical properties of the soils changed: the thickness of the humus profile and the content of organic carbon decreased; the contents of gypsum and carbonate carbon increased, with the zone of their accumulation shifting up the profile; and the forms of carbonate features were transformed. The percentage of exchangeable sodium and magnesium among the exchangeable bases increased, and the magnetic susceptibility decreased. The most “arid” properties are found in the paleosol buried last in the studied chronological sequence: its humus horizon is the lightest, its profile is the most enriched in carbonates, its content of exchangeable sodium and magnesium among the exchangeable bases is the highest, its magnetic susceptibility is the lowest, and the amount of gypsum in the second metre of its profile is at a maximum. Over the time-span of the kurgan's construction, Haplic Chernozems (Loamic) changed into Calcic Chernozems (Loamic).
For the studied time-span, palynological analysis revealed a decrease in forest area and an increase in the share of grassy vegetation. Among the grasses, the proportion of steppe and xerophytic species increased. The climate of the studied interval (the beginning of the development of the Maikop culture in Ciscaucasia) is characterized as drier and hotter than today. The material for the earthen layers of the kurgan's constructions was taken from the gleyic horizons of Gleysols (the lowest layer in the first and second constructions) and from the Ah and AhB horizons of Chernozems (the overwhelming majority of the layers). This study was supported by the Russian Science Foundation, project no. 16-17-10280.
How to cite: Khokhlova, O. and Sverchkova, A.: Geoarchaeological study of big Essentuksky 1 kurgan in Ciscaucasia, Russia, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1868, https://doi.org/10.5194/egusphere-egu2020-1868, 2020.
EGU2020-21482 | Displays | ITS2.2/GM12.5
The Mesolithic site Ullafelsen in the Fotsch Valley (Tyrol, Austria) – a biomarker perspective
Michael Zech, Marcel Lerch, Marcel Bliedtner, Clemens Geitner, Dieter Schäfer, Jean Nicolas Haas, Roland Zech, and Bruno Glaser
The archaeology of high mountain regions has received great attention since the discovery of the Copper Age mummy "Ötzi" in the Ötztal Alps in 1991. Results of earlier archaeological research projects show that Mesolithic hunter-gatherers have lived in Alpine regions since the beginning of the Holocene, 11,700 years ago (Cornelissen & Reitmaier 2016). Among others, the Mesolithic site Ullafelsen (1860 m a.s.l.) and its surroundings represent a very important archaeological reference site in the Fotsch Valley (Stubai Alps, Tyrol) (Schäfer 2011). Many archaeological artifacts and fireplaces have been found at different places in the Fotschertal, providing evidence for the presence and way of life of our ancestors. The "Mesolithic project Ullafelsen" involves different scientific disciplines, ranging from high mountain archaeology through geology, geomorphology, soil science, sedimentology and petrography to palaeobotany (Schäfer 2011). Within an ongoing DFG project we aim to address questions related to past vegetation and climate and to human history, as well as their influence on pedogenesis, from a biomarker and stable isotope perspective (cf. Zech et al. 2011). Our results suggest, for instance, that (i) the dominant recent and past vegetation can be differentiated chemotaxonomically based on leaf wax-derived n-alkane biomarkers, (ii) there is no evidence for buried Late Glacial topsoils being preserved on the Ullafelsen, as argued by Geitner et al. (2014); rather, humic-rich subsoils were formed as Bh horizons by podzolisation, and (iii) marked vegetation changes, likely associated with alpine pasture activities since the Bronze Age, are documented in Holocene peat bogs in the Fotsch Valley. Nevertheless, some challenges remain in joining all analytical data into a consistent overall picture of the human-environmental history of this high mountain region.
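Two standard indices used in leaf-wax n-alkane chemotaxonomy are the average chain length (ACL) and the carbon preference index (CPI), here in its simple odd-over-even form. The sketch below shows how they are computed; the abundances are hypothetical, not measurements from the Fotsch Valley samples:

```python
# Hypothetical n-alkane abundances (chain length -> concentration),
# for illustration only; real values come from GC/GC-MS measurements.
abund = {25: 0.8, 26: 0.2, 27: 2.5, 28: 0.3, 29: 4.0,
         30: 0.4, 31: 3.2, 32: 0.3, 33: 1.0}

odd = {n: a for n, a in abund.items() if n % 2 == 1}
even = {n: a for n, a in abund.items() if n % 2 == 0}

# Average chain length over the odd long-chain homologues: an
# abundance-weighted mean chain length, used chemotaxonomically
# (e.g. grasses tend toward longer chains than trees)
acl = sum(n * a for n, a in odd.items()) / sum(odd.values())

# Carbon preference index, simple odd-over-even form; values well
# above 1 indicate a fresh higher-plant leaf-wax signature
cpi = sum(odd.values()) / sum(even.values())

print(round(acl, 2), round(cpi, 2))  # → 29.19 9.58
```

Published studies often use range-restricted CPI formulations (e.g. Marzi-type averaging over two even-chain windows); the odd-over-even ratio above conveys the same idea.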
Cornelissen & Reitmaier (2016): Filling the gap: Recent Mesolithic discoveries in the central and south-eastern Swiss Alps. In: Quaternary International, 423.
Geitner, C., Schäfer, D., Bertola, S., Bussemer, S., Heinrich, K. and J. Waroszewski (2014): Landscape archaeological results and discussion of Mesolithic research in the Fotsch valley (Tyrol). In: Kerschner, H., Krainer, K. and C. Spötl: From the foreland to the Central Alps – Field trips to selected sites of Quaternary research in the Tyrolean and Bavarian Alps (DEUQUA EXCURSIONS), Berlin, 106-115.
Schäfer (2011): Das Mesolithikum-Projekt Ullafelsen (Teil 1). Mensch und Umwelt im Holozän Tirols (Band 1). 560 p., Innsbruck: Philipp von Zabern.
Zech, M., Zech, R., Buggle, B., Zöller, L. (2011): Novel methodological approaches in loess research - interrogating biomarkers and compound-specific stable isotopes. In: E&G Quaternary Science Journal, 60.
How to cite: Zech, M., Lerch, M., Bliedtner, M., Geitner, C., Schäfer, D., Haas, J. N., Zech, R., and Glaser, B.: The Mesolithic site Ullafelsen in the Fotsch Valley (Tyrol, Austria) – a biomarker perspective, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21482, https://doi.org/10.5194/egusphere-egu2020-21482, 2020.
The archaeology of high mountain regions got high attention since the discovery of the copper age mummy called "Ötzi" in the Ötztaler Alps in 1991. Results of former archaeological research projects show that mesolithic hunter-gatherers lived in Alpine regions since the beginning of the Holocene, 11,700 years ago (Cornelissen & Reitmaier 2016). Amongst others, the Mesolithic site Ullafelsen (1860 m a.s.l.) and surroundings represent a very important archaeological reference site in the Fotsch Valley (Stubaier Alps, Tyrol) (Schäfer 2011). Many archaeological artifacts and fire places were found at different places in the Fotschertal, which provides evidence for the presence and the way of living of our ancestor. The "Mesolithic project Ullafelsen" includes different scientific disciplines ranging from high mountain archaeology over geology, geomorphology, soil science, sedimentology, petrography to palaeobotany (Schäfer 2011). Within an ongoing DFG project we aim at addressing questions related to past vegetation and climate, human history as well as their influence on pedogenesis from a biomarker and stable isotope perspective (cf. Zech et al. 2011). Our results for instance suggest that (i) the dominant recent and past vegetation can be chemotaxonomically differentiated based on leaf wax-derived n-alkane biomarkers, (ii) there is no evidence for buried Late Glacial topsoils being preserved on the Ullafelsen as argued by Geitner et al. (2014), rather humic-rich subsoils were formed as Bh-horizons by podsolisation and (iii) marked vegetations changes likely associated with alpine pasture activities since the Bronce Age are documented in Holocene peat bogs in the Fotsch Valley. Nevertheless, there remain some challenges by joining all analytical data in order to get a consistent overall picture of human-environmental history of this high mountain region.
Cornelissen & Reitmaier (2016): Filling the gap: Recent Mesolithic discoveries in the central and south-eastern Swiss Alps. In: Quaternary International, 423.
Geitner, C., Schäfer, D., Bertola, S., Bussemer, S., Heinrich, K. and J. Waroszewski (2014): Landscape archaeological results and discussion of Mesolithic research in the Fotsch valley (Tyrol). In: Kerschner, H., Krainer, K. and C. Spötl: From the foreland to the Central Alps – Field trips to selected sites of Quaternary research in the Tyrolean and Bavarian Alps (DEUQUA EXCURSIONS), Berlin, 106-115.
Schäfer (2011): Das Mesolithikum-Projekt Ullafelsen (Teil 1). Mensch und Umwelt im Holozän Tirols (Band 1). 560 p., Innsbruck: Philipp von Zabern.
Zech, M., Zech, R., Buggle, B., Zöller, L. (2011): Novel methodological approaches in loess research - interrogating biomarkers and compound-specific stable isotopes. In: E&G Quaternary Science Journal, 60.
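As an illustration of the chemotaxonomic reasoning in point (i), leaf-wax n-alkane distributions are commonly summarized with simple indices such as the average chain length (ACL) or chain-length ratios. The minimal sketch below is not from this study: the abundance values are invented, and the C31/C27 ratio is just one commonly used grass-versus-woody proxy.

```python
def acl(abundances):
    """Average chain length: sum(n * C_n) / sum(C_n)."""
    total = sum(abundances.values())
    return sum(n * a for n, a in abundances.items()) / total

def ratio_c31_c27(abundances):
    """n-C31 / n-C27 ratio; higher values are typically read as a
    stronger grass/herb (vs. deciduous tree) leaf-wax signal."""
    return abundances[31] / abundances[27]

# Hypothetical peak areas for odd-chain n-alkanes C25-C33
sample = {25: 5.0, 27: 12.0, 29: 30.0, 31: 40.0, 33: 13.0}

print(round(acl(sample), 2))            # 29.88
print(round(ratio_c31_c27(sample), 2))  # 3.33
```

Such indices are only screening tools; the study's actual chemotaxonomic differentiation rests on the full homologue distributions.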
EGU2020-2949 | Displays | ITS2.2/GM12.5
Fingerprints from past charcoal burning - lessons learned and future perspective studying Relict Charcoal Hearths (RCH)
Thomas Raab, Alexandra Raab, Florian Hirsch, Alexander Bonhage, and Anna Schneider
Digital Elevation Models (DEMs) recorded by LiDAR are now available for large areas, providing an opportunity to map small landforms for the first time in high resolution and over larger areas. The majority of these small earth-surface structures are of anthropogenic origin, and their formation is often ancient. The newly visible microrelief can therefore reflect the imprints of centuries or millennia of past land use. Among the anthropogenic structures identified in the new high-resolution DEMs, Relict Charcoal Hearths (RCHs) are particularly widespread and abundant. RCHs are remains of past charcoal burning and are mainly found in pre-industrial mining areas of Europe and North America. They normally have a relative height of less than 50 centimetres on flat terrain and a horizontal dimension of about 5-30 metres. Despite their small spatial dimensions, RCHs can reach significant land coverage due to their enormous numbers. Recent LiDAR data show that a remarkable area of our landscape carries this human fingerprint from the past. We therefore need to ask about its effect on soil landscapes and ecosystems in general. The growing relevance of RCHs is also noticeable in the rising number of RCH case studies that have been conducted. This study reviews the state of knowledge about RCHs mainly by addressing three coupled legacies of historic charcoal burning: the geomorphological, the pedological, and the ecological legacy. We are going to present recent findings on these three legacies.
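The DEM-based detection of such low-amplitude mounds can be caricatured in a few lines: subtract a local mean surface from the DEM and flag positive residuals of RCH-like height (here 0.2-0.5 m). This is a minimal sketch on a synthetic grid, not the authors' workflow; the window size and thresholds are invented.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_relief(dem, window=31):
    """Residual relief: DEM minus its local mean. Low-amplitude
    positive residuals can flag mound-like microrelief such as RCHs."""
    return dem - uniform_filter(dem, size=window)

def candidate_mask(dem, min_height=0.2, max_height=0.5, window=31):
    """Boolean mask of cells whose residual height falls in the
    (invented) RCH height range."""
    r = local_relief(dem, window)
    return (r >= min_height) & (r <= max_height)

# Synthetic 1 m DEM: a gentle slope plus one ~0.4 m mound, ~10 m across
y, x = np.mgrid[0:200, 0:200]
dem = 0.002 * x
dem = dem + 0.4 * np.exp(-(((x - 100) ** 2 + (y - 100) ** 2) / (2 * 5.0 ** 2)))

mask = candidate_mask(dem)
print(bool(mask[100, 100]), bool(mask[10, 10]))  # mound flagged, plain slope not
```

In practice a real workflow would add shape and size criteria (roundness, diameter) and field verification, since many other microrelief features fall in the same height band.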
How to cite: Raab, T., Raab, A., Hirsch, F., Bonhage, A., and Schneider, A.: Fingerprints from past charcoal burning - lessons learned and future perspective studying Relict Charcoal Hearths (RCH), EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2949, https://doi.org/10.5194/egusphere-egu2020-2949, 2020.
EGU2020-22674 | Displays | ITS2.2/GM12.5
Changes in topography of Cracow centre during the last millennium, Poland
Adam Łajczak and Roksana Zarychta
Investigations of changes in the topography of historical town centres usually focus on estimating the thickness of cultural layers and on determining changes of land topography in selected small areas or along profiles. Less often is attention paid to the spatial differentiation of these changes within larger parts of historical town centres. The aim of this presentation is to reconstruct the differences between the palaeotopography and the modern topography of the historical centre of Cracow, Poland, over the last millennium. The palaeotopography studied represents the situation before the 10th century, without any significant human impact. It was reconstructed from published contour-line maps, based on archaeological and geoengineering investigations, which show the roof of the in situ fossil soil. This preliminary contour-line map served as the basis for a Digital Elevation Model (DEM). A DEM from aerial laser scanning (ALS DEM) shows the contemporary topography of the Cracow centre. The application of selected morphometric indices makes it possible to describe quantitatively the spatial changes in altitude, local relative height, slope, and aspect classes. The analysis of changes in the studied elements of topography shows that, at the scale of the whole study area, the changes are directed towards flattening. At a more local scale, areas with flattening trends are adjacent to areas with undulating trends.
Only few papers discuss changes in town topography as the consequence of the long-lasting accumulation of anthropogenic deposits resulting in land flattening or increased undulation. These papers, however, do not consider a quantitative evaluation of the many-sided character of this process. Similar remarks concern papers on the modern development of towns. The positive vertical changes in the topography of the Cracow centre over the last millennium revealed in the newest literature show large spatial differentiation and reach over 10 m. The older literature suggested a value of 5 m for the area of the Old Town in Cracow. The other parameters of changes in Cracow's topography studied by the authors have not previously been considered in the literature.
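The core quantitative comparison described above, differencing a reconstructed palaeotopography DEM against a modern ALS DEM and deriving morphometric indices such as slope, can be sketched as follows. The grids and values are hypothetical, not the Cracow data.

```python
import numpy as np

def elevation_change(modern, paleo):
    """Per-cell thickness of accumulated (+) or removed (-) material
    between two co-registered DEMs."""
    return modern - paleo

def slope_deg(dem, cell=1.0):
    """Slope in degrees from central-difference gradients."""
    dzdy, dzdx = np.gradient(dem, cell)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

# Hypothetical 1 m grids: a gently tilted palaeosurface buried under
# level anthropogenic fill (all values invented)
y, x = np.mgrid[0:100, 0:100]
paleo = 200.0 + 0.02 * x
modern = np.full_like(paleo, 202.0)

dz = elevation_change(modern, paleo)
print(round(float(dz.mean()), 2))       # mean cultural-layer thickness: 1.01
print(float(slope_deg(modern).max()))   # 0.0: the fill flattened the relief
```

The same differencing, aggregated per altitude, relative-height, slope and aspect class, would yield the kind of spatially differentiated statistics the study reports.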
How to cite: Łajczak, A. and Zarychta, R.: Changes in topography of Cracow centre during the last millennium, Poland, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22674, https://doi.org/10.5194/egusphere-egu2020-22674, 2020.
EGU2020-2202 | Displays | ITS2.2/GM12.5
Gravity aspects from recent Earth gravity model EIGEN 6C4 for geoscience and archaeology in Sahara, Egypt
Jaroslav Klokocnik, Vaclav Cilek, Jan Kostelecky, and Ales Bezdek
A new method to detect paleolakes via their gravity signal is presented, with implications for geoscience and archaeology. The gravity aspects or descriptors (gravity anomalies/disturbances, second radial derivatives, strike angles and virtual deformations) were computed from the global static combined gravity field model EIGEN 6C4 for an application in archaeology and geoscience in Egypt and surrounding countries. The model combines the best currently available satellite and terrestrial data, including gradiometry from the GOCE mission, and has a ground resolution of ~10 km. From the archaeological literature we took the positions of archaeological sites of the Holocene occupations between 8500 and 5300 BC (8.5-5.3 ky BC) in the Eastern Sahara, Western Desert, Egypt. We correlated the features found in the gravity data with these locations; the correlation is good, assuming that the sites were mostly at paleolake borders or at rivers. On this basis we estimate the possible location, extent and shape of the putative paleolake(s). We also reconsider the origin of Libyan Desert glass (LDG) in the Great Sand Sea (GSS) and support a hypothesis about an older impact structure created in the GSS, repeatedly filled by water, which might have been part of some of the possible paleolake(s).
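To make the "second radial derivative" descriptor concrete: for the simplest possible field, a point mass with disturbing potential T = GM/r, the descriptor is T_rr = 2GM/r^3 and the gravity disturbance is GM/r^2. The sketch below is a numerical sanity check of that relation only; the actual study evaluates these quantities from the EIGEN 6C4 spherical-harmonic coefficients, which this sketch does not attempt. The mass and distance are invented.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def potential(m, r):
    """Disturbing potential of a point mass: T = GM / r."""
    return G * m / r

def disturbance(m, r):
    """Gravity disturbance of a point mass: GM / r^2."""
    return G * m / r**2

def second_radial(m, r):
    """Second radial derivative T_rr = 2GM / r^3 (the 'Tzz' gravity aspect)."""
    return 2 * G * m / r**3

# Verify the analytic T_rr against a central-difference second derivative
m, r, h = 1e15, 1.0e4, 1.0  # invented mass anomaly, distance, step
numeric = (potential(m, r + h) - 2 * potential(m, r) + potential(m, r - h)) / h**2
print(np.isclose(numeric, second_radial(m, r), rtol=1e-6))  # True
```

Because T_rr falls off as 1/r^3 rather than 1/r^2, the second radial derivative sharpens shallow density contrasts such as sediment-filled paleolake basins, which is why it is useful as a detection descriptor.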
How to cite: Klokocnik, J., Cilek, V., Kostelecky, J., and Bezdek, A.: Gravity aspects from recent Earth gravity model EIGEN 6C4 for geoscience and archaeology in Sahara, Egypt, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2202, https://doi.org/10.5194/egusphere-egu2020-2202, 2020.
EGU2020-306 | Displays | ITS2.2/GM12.5
The terraces of Petra, Jordan: archives of a lost agricultural hinterland
Rupert Bäumler, Bernhard Lucke, Jago Birk, Patrick Keilholz, Christopher O. Hunt, Sofia Laparidou, Nizar Abu-Jaber, Paula Kouki, and Sabine Fiedler
Petra is hidden in rugged arid mountains prone to flash floods, and the dry climate and barren landscape seem hostile to cultivation. Nevertheless, there are countless remains of terraces of so far unknown purpose. We investigated three well-preserved terraces at Jabal Haroun to the south-west of Petra, which seemed representative of the diverse geology and types of terraces. A hydrological model shows that the terraces were effective at both controlling runoff and collecting water and sediments: they minimized flash floods and allowed for agricultural use. However, rare extreme rainfall events could only be controlled to a limited degree, and drought years without floods caused crop failures. Pollen and phytoliths in the sediments attest to the past presence of well-watered fields, including reservoirs storing collected runoff, which suggests a sophisticated irrigation system. In addition, faecal biomarkers and plant-available phosphorus indicate planned manuring. Ancient land use as documented by the terraces created a green oasis in the desert. The terraces seem to represent Petra's agricultural hinterland, which was lost during the Islamic period due to growing aridity and an increased frequency of devastating extreme precipitation events. The heirs of the Nabateans reverted to their original Bedouin subsistence strategies but continue to opportunistically cultivate terrace remains.
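The runoff-harvesting logic that the hydrological model captures can be illustrated with a toy storage balance: each rainfall event generates runoff from a catchment, a terrace captures it up to its storage capacity, and extreme events overwhelm it. All numbers below are invented for illustration and bear no relation to the Jabal Haroun model.

```python
def terrace_season(rain_events_mm, catchment_m2=5000.0,
                   runoff_coeff=0.3, capacity_m3=150.0):
    """Toy per-season water balance for one runoff-fed terrace:
    each event's runoff is captured up to storage capacity and the
    remainder spills downslope. All parameter values are invented."""
    stored, spilled = 0.0, 0.0
    for rain_mm in rain_events_mm:
        inflow = runoff_coeff * (rain_mm / 1000.0) * catchment_m2  # m^3
        room = capacity_m3 - stored
        stored += min(inflow, room)
        spilled += max(inflow - room, 0.0)
    return stored, spilled

# Several modest events are fully captured; one cloudburst mostly spills
print(terrace_season([10, 5, 15, 8]))
print(terrace_season([120]))
```

The two cases mirror the abstract's finding: ordinary events are controlled, while rare extremes exceed what the terrace system can absorb.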
How to cite: Bäumler, R., Lucke, B., Birk, J., Keilholz, P., Hunt, C. O., Laparidou, S., Abu-Jaber, N., Kouki, P., and Fiedler, S.: The terraces of Petra, Jordan: archives of a lost agricultural hinterland, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-306, https://doi.org/10.5194/egusphere-egu2020-306, 2020.
EGU2020-7421 | Displays | ITS2.2/GM12.5
The last traces. Historical images and the reconstruction of lost archaeological landscapes
Nehemie Strupler
For a century, aerial photography has had a successful track record of detecting and mapping archaeological traces of human activity in the landscape. The tools and procedures evolved gradually, following technological and methodological advances in Earth remote sensing. It started with the use of crop marks and other proxies such as soil, shadow or snow marks to distinguish observable differences caused by subsurface archaeological remains and so locate buried archaeological features. Besides these data gathered by archaeologists, the declassification at the end of the last century of millions of photographs (from the CORONA, ARGON and LANYARD US satellite programs as well as other non-US military programs) has resulted in a vast archive.
Historical images represent a fundamental tool in archaeological research, particularly for Western Asia. They document landscapes that are nowadays inaccessible, having been covered by modern infrastructure (e.g. buildings, roads) or heavily modified (notably by the increasing use of mechanized agricultural methods), erasing fragile traces from thousands of years ago. Only through the detailed analysis of archives from the 20th century is it possible to recover archaeological evidence and palaeo-environmental features.
The traditional workflow uses historical images as a first step prior to archaeological fieldwork, which verifies and dates the detected features. A major problem arises when ground truthing of these detected features is no longer possible. How trustworthy are the detections, and how can they be dated? My poster/talk will present sources as well as state-of-the-art analysis of historical aerial images based on the Scaling Territories Project (SCATTER). The combined use of historical maps, aerial images and ground-acquired archaeological data from nearby field-walking prospections makes it possible to reconstruct the palaeo-landscapes and the locations of (presumed and now lost) settlements in Central Anatolia.
How to cite: Strupler, N.: The last traces. Historical images and the reconstruction of lost archaeological landscapes, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7421, https://doi.org/10.5194/egusphere-egu2020-7421, 2020.
EGU2020-62 | Displays | ITS2.2/GM12.5
What is the future for our earthen heritage? Modelling the risk of environmentally-driven deterioration at sites located in dryland areas
Jenny Richards, Jerome Mayaud, Richard Bailey, and Heather Viles
Earthen heritage forms ~10% of UNESCO’s World Heritage List, with sites generally concentrated in dryland environments. Many sites are exposed to environmental processes such as wind, sediment movement and rain, which can result in extensive deterioration of the earthen heritage. To improve the effectiveness of conservation strategies that aim to minimise deterioration, there is an urgent need to understand how multiple environmental processes interact and impact earthen heritage, particularly over longer (centennial) timescales. We therefore apply the ViSTA-HD model (Vegetation and Sediment TrAnsport model for Heritage Deterioration) to Suoyang Ancient City, an archaeological site in north-west China made of rammed earth. ViSTA-HD is a cellular automaton model developed by the authors to model the risk of environmentally-driven deterioration at earthen heritage sites. It comprises two modules: (i) an environmental module that spatially resolves environmental processes across an earthen site, and (ii) a deterioration module that spatially resolves the risk of deterioration across a wall face. The risk of deterioration is simulated for three common deterioration features at Suoyang: polishing, pitting and slurry. We use ViSTA-HD to investigate variations in deterioration risk under potential future climate scenarios across the 21st century. We also use the model to robustly test the impact of potential nature-based conservation strategies.
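The two-module structure described above, an environmental field driving a cell-by-cell deterioration update, can be caricatured with a generic cellular automaton on a wall-face grid. This is a stand-in sketch, not ViSTA-HD itself: the exposure field, neighbourhood rule and constants are all invented.

```python
import numpy as np

def exposure(rows, cols, wind=1.0, rain=0.5):
    """Stand-in 'environmental module': exposure is highest along the
    wall top (rain wash) and increases toward the windward edge."""
    r = np.linspace(1.0, 0.2, rows)[:, None]   # top row most exposed
    c = np.linspace(0.2, 1.0, cols)[None, :]   # windward edge most exposed
    return wind * c + rain * r

def run_ca(rows=20, cols=40, steps=50, k=0.01):
    """Stand-in 'deterioration module': each step, risk in a cell grows
    with local exposure and with the mean risk of its 4 neighbours."""
    risk = np.zeros((rows, cols))
    e = exposure(rows, cols)
    for _ in range(steps):
        pad = np.pad(risk, 1, mode="edge")
        nbr = (pad[:-2, 1:-1] + pad[2:, 1:-1] +
               pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
        risk = risk + k * e * (1.0 + nbr)
    return risk

risk = run_ca()
print(bool(risk[0, -1] > risk[-1, 0]))  # risk concentrates at the exposed corner
```

The point of the caricature is the coupling: deterioration risk in one cell feeds back on its neighbours, so risk patterns emerge from the interaction of processes rather than from any single one.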
How to cite: Richards, J., Mayaud, J., Bailey, R., and Viles, H.: What is the future for our earthen heritage? Modelling the risk of environmentally-driven deterioration at sites located in dryland areas, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-62, https://doi.org/10.5194/egusphere-egu2020-62, 2020.
EGU2020-22132 | Displays | ITS2.2/GM12.5
Climate and nature management in the Middle Ages in the Upper Volga basin
Vyacheslav Nizovtsev and Natalia Erman
A paired analysis was performed for the Upper Volga Basin of historical documents (primary chronicle sources published in the Complete Collection of Russian Annals) and of published work on the dynamics of lake levels and river water levels together with dendrological and palynological data. The peak of the medieval optimum was at the turn of the first and second millennia, and its maximum in the region was noted at the end of the X century. During this period there were no severe winters. Low summer rainfall led to a reduction of shallow water bodies and water-logging and to a decrease in river floods. This is evidenced by the settlements on the floodplains of a number of Upper Volga rivers. At this time, the Upper Volga route and the "route from the Varangians to the Greeks" began to function. The exploration by the Slavs of the Upper Volga basin and the development of the settlement structure took place under conditions favourable for agriculture and settlement. Climatic conditions not only provided good harvests, but also contributed to economic growth and to the development of relations between Slavic tribes during the formation of the ancient Russian state. The transition period of the XIII-XIV centuries was called the "period of contrasts", because it was a harbinger of the Little Ice Age. It was characterized by increased intra-seasonal climate variability, increased humidity, drastic year-to-year fluctuations in humidity and relative warmth, and a widespread decrease in summer temperatures by 1-2 °C. The XIII century contains one of the longest periods in which extreme natural phenomena were concentrated: the years 1211-1233, 15 of which were years of famine. Climatologists call the XIV-XIX centuries the Little Ice Age (LIA). The average annual temperature dropped by 1.4 °C, and the average summer temperature by 2-3 °C.
Periods of increased humidity alternated more frequently with dry periods, cyclonic activity increased dramatically, and the duration of the growing season decreased by almost three weeks. In the XV century, more than 150 extreme adverse natural phenomena were recorded. In the era of the Little Ice Age, dramatic climate fluctuations were recorded by various sources more and more often. In Central Russia, chroniclers recorded drastic climate cooling in the last third of the XVI century. Simultaneously with the beginning of the Little Ice Age, watershed areas were developed during the internal colonization of the land. The determining factors were demographic, socio-economic and historical, but the role of the natural factor cannot be ignored. The climax of the increase in the number of extreme natural phenomena falls in the XV-XVII centuries. Only at the end of the XVII century did climate conditions in Russia somewhat level off.
This work was financially supported by the RFBR (Russian Foundation of Basic Research) grant: Project № 19-05-00233.
How to cite: Nizovtsev, V. and Erman, N.: Climate and nature management in the Middle Ages in the Upper Volga basin, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22132, https://doi.org/10.5194/egusphere-egu2020-22132, 2020.
A paired analysis of historical documents was performed for the Upper Volga Basin (primary chronical sources published in the Complete Collection of Russian annals were analyzed), and papers on the dynamics of fluctuations in lake levels, river water levels, dendrological and palynological data were published. The peak of the medieval optimum was at the turn of the first and second millennia, and its maximum in the region was noted at the end of the X century. During this period there were no severe winters. A small amount of summer rainfalls led to a reduction in shallow water bodies, water-logging and a decrease in river floods. This is evidenced by the settlements on the floodplains of a number of Upper Volga rivers. At this time, the Upper Volga route and the "route from the Varangians to the Greeks" began to function. The exploration by the Slavs of the Upper Volga basin and the development of the settlement structure took place in favorable conditions for agriculture and settlement. Climatic conditions not only provided good harvests, but also contributed to the economic growth and development of relations between Slavic tribes during the formation of the ancient Russian state. The transition period of the XIII - XIV centuries was called the “period of contrasts,” because it was a harbinger of the Little Ice Age. It was characterized by the following features: an increase in the intra-seasonal climate variability, an increase in humidity, drastic fluctuation in humidity and relative warmth from year to year, a widespread decrease in summer temperatures by 1-2 ° C. The XIII century accounts for one of the longest periods in which various extreme natural phenomena concentrated. It refers to the years 1211-1233, 15 of which were years of famine. Climatologists call XIV-XIX centuries the Little Ice Age (LIA). The average annual temperature dropped by - 1.4 ° С, and the average summer temperature dropped by 2-3° С. 
Periods of increased humidity alternated with dry periods more frequently, cyclonic activity increased dramatically, and the duration of the growing season decreased by almost three weeks. In the XV century already more than 150 extreme adverse natural phenomena were recorded. In the era of the Little Ice Age, dramatic climate fluctuations were recorded by various sources more and more often. In Central Russia chroniclers recorded drastic climate cooling in the last third of the XVI century. Simultaneously with the beginning of the Little Ice Age, the process of developing watershed areas took place during the internal colonization of the land. The determining factors were demographic, socio-economic and historical, but the role of the natural factor cannot be ignored. The climax of the increase in the number of extreme natural phenomena falls on the XV-XVII centuries. Only at the end of the XVII century climate conditions in Russia somewhat leveled off.
This work was financially supported by the RFBR (Russian Foundation for Basic Research), project № 19-05-00233.
How to cite: Nizovtsev, V. and Erman, N.: Climate and nature management in the Middle Ages in the Upper Volga basin, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22132, https://doi.org/10.5194/egusphere-egu2020-22132, 2020.
EGU2020-19139 | Displays | ITS2.2/GM12.5
Exploration of submerged Mesolithic landscapes around the Brown Bank, southern North Sea
Merle Muru, Rachel Harding, Simon Fitch, Tine Missiaen, and Vince Gaffney
During the late glacial and early Holocene, vast areas of dry land stretched from the British Isles to continental Europe over what is now the southern part of the North Sea. Whilst it is known that this landscape was inhabited, little is known about the cultures that lived there and the surrounding environment. This study focuses on the Brown Bank area, between the UK and Dutch coasts, with its significant 25 km long and 10-15 m high ridge on the seabed which has provided many Mesolithic ex-situ finds. However, all of these finds have been recovered serendipitously due to commercial fishing and dredging, and thus the landscape and sedimentary context of these archaeological finds is unclear.
The goal of this study is to map the terrestrial features in the Brown Bank area and reconstruct the palaeolandscape and its inundation to determine the potential locations from which this archaeological material derives, and potentially locate Mesolithic settlement sites. The project uses high-resolution parametric echosounder surveys in a dense survey network to record the area and facilitate later targeted dredging and vibro-core sampling.
The seismic surveys revealed a pre-marine-inundation landscape with fluvial channels eroded into post-glacial sediments. A peat layer was located on the tops of the channel banks, where it continues laterally for hundreds of metres. Radiocarbon dating of the top part of the peat layer, just below the transgressive deposits, gave ages of around 10.2-9.9 cal ka BP. Palaeogeographic reconstructions based on the mapped terrestrial features and the available relative sea level change data suggest that the final inundation of the area happened c. 1000 years later. Where dredging was carried out in areas of interest, primarily where the early Holocene surface outcropped onto the seabed, a large number of blocks of peat with pieces of wood and other macrofossils were recovered, suggesting good potential for the preservation of archaeological material and indicating possible locations of origin for the serendipitous finds made by fishermen.
We conclude that this study provides new insights into the palaeogeography and the timing of the inundation of the Brown Bank area and gives the landscape context to the potential Mesolithic habitation of this part of the southern North Sea.
How to cite: Muru, M., Harding, R., Fitch, S., Missiaen, T., and Gaffney, V.: Exploration of submerged Mesolithic landscapes around the Brown Bank, southern North Sea, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19139, https://doi.org/10.5194/egusphere-egu2020-19139, 2020.
EGU2020-5185 | Displays | ITS2.2/GM12.5
Discontinuities in sediment connectivity controlled by human-environment interaction along the sediment cascade of a mesoscale catchment in Central Germany
Markus Fuchs, Raphael Steup, Katja Korthiringer, and Timo Seregely
In many Central European river catchments, changes in long-term sediment dynamics are caused by external driving forces (e.g. human impact, climate change). In addition, the sensitivity of fluvial systems to environmental change is controlled by the geomorphic connectivity of the catchment's individual sediment sinks. In this study, we reconstruct the temporal evolution of different types of sediment reservoirs along the sediment cascade in a mesoscale upland catchment to assess its sensitivity to external changes. The chronological evolution of hillslope and floodplain sediments is based on 79 OSL and 83 radiocarbon (14C) ages. Our results show that the deposition of hillslope sediments coincides with the first evidence for human-induced soil erosion triggered by the earliest European farmers, but that these sediments remained decoupled from the river network for more than two millennia before the aggradation of overbank fines started and steadily increased. The connectivity between the colluvial and alluvial sediment sinks of the catchment is therefore mainly controlled by the landscape geometry and by the frequency and magnitude of erosion, transport and deposition processes.
How to cite: Fuchs, M., Steup, R., Korthiringer, K., and Seregely, T.: Discontinuities in sediment connectivity controlled by human-environment interaction along the sediment cascade of a mesoscale catchment in Central Germany, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5185, https://doi.org/10.5194/egusphere-egu2020-5185, 2020.
EGU2020-16886 | Displays | ITS2.2/GM12.5
Understanding risk and resilience in alpine communities: A conceptual model for coupling human and landscape systems
Margreth Keiler, Jorge Alberto Ramirez, Md Sarwar Hossain, Tina Haisch, Olivia Martius, Chinwe Ifejika Speranza, and Heike Mayer
Disasters induced by natural hazards or extreme events consist of interacting human and natural components. While progress has been made to mitigate and adapt to natural hazards, much of the existing research lacks interdisciplinary approaches that equally consider both natural and social processes. More importantly, this lack of integration between approaches remains a major challenge in developing disaster risk management plans for communities. In this study, we made a first attempt to develop a conceptual model of a coupled human-landscape system in Swiss Alpine communities. The conceptual model contains a system dynamics (e.g. interaction, feedbacks) component to reproduce community level, socio-economic developments and shocks that include economic crises leading to unemployment, depopulation and diminished community revenue. Additionally, the conceptual model contains climate, hydrology, and geomorphic components that are sources of natural hazards such as floods and debris flows. Feedbacks between the socio-economic and biophysical systems permit adaptation to flood and debris flow risks by implementing spatially explicit mitigation options including flood defences and land cover changes. Here we justify the components, scales, and feedbacks present in the conceptual model and provide guidance on how to operationalize the conceptual model to assess risk and community resilience of Swiss Alpine communities.
How to cite: Keiler, M., Ramirez, J. A., Hossain, M. S., Haisch, T., Martius, O., Ifejika Speranza, C., and Mayer, H.: Understanding risk and resilience in alpine communities: A conceptual model for coupling human and landscape systems, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-16886, https://doi.org/10.5194/egusphere-egu2020-16886, 2020.
EGU2020-10654 | Displays | ITS2.2/GM12.5
A qualitative approach to evaluating the impact of human interventions on the middle Charente River (West France)
Amélie Duquesne, Christine Plumejeaud-Perreau, and Jean-Michel Carozza
Although many studies have analysed the impact of human interventions on European rivers over decades or centuries, researchers have rarely evaluated the geomorphological effects of these anthropogenic pressures on fluvial systems. However, quantifying anthropogenic impacts is fundamental to understanding how rivers are affected by human interventions and to improving river management and restoration. The aim of this study is to propose a new and original qualitative method for estimating the importance of human impacts on rivers over the last three centuries, using the middle Charente River as a test case. The study area is an anastomosing, low-energy and low-mobility river of the lowlands of western France. It extends from the city of Angoulême (Charente) to the city of Saintes (Charente-Maritime), over a length of approximately 100 km. The study segment has been subjected to high anthropogenic pressure since the High Middle Ages, and was engineered during the 19th century to facilitate navigation and terrestrial transportation, to exploit the water's driving force (water mills and paper mills), to sustain the local population (fishing dams and agro-pastoral uses) and to provide flood protection. To understand and estimate the anthropogenic heritage of the Charente River, this study employed a two-stage method: 1) an inventory of the human interventions on the fluvial system through the consultation of geo-historical data (textual archives, historical maps and iconography) dating from the end of the 17th century to the 2010s, and 2) an evaluation of the impact of each human intervention, sub-category and category of intervention based on the calculation of the Cumulative Human Impact Index. The Cumulative Human Impact Index is composed of several qualitative attributes graded by an evaluator.
The results allow one 1) to generate a database and typology of the human interventions affecting the middle Charente River over the long term; 2) to map the cumulative impacts of human interventions on the study area; and 3) to analyze the unitary and overall impact of each human intervention, sub-category and category of intervention on the river landscape's heritage. Finally, this study concludes with 1) a discussion of the advantages of using a qualitative methodology for the estimation of anthropogenic impacts and 2) a reflection on the use of the maps of cumulative human impacts for Charente River management and restoration.
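The grade-and-sum logic of a cumulative impact index of this kind can be sketched as follows. The attribute names, grade scale and simple unweighted summation below are illustrative assumptions for exposition only, not the authors' actual Cumulative Human Impact Index scheme.

```python
# Minimal sketch of a cumulative human-impact index: an evaluator grades each
# intervention on qualitative attributes, and grades are summed per intervention,
# per category, and overall. Attribute names and grades here are hypothetical.
from dataclasses import dataclass


@dataclass
class Intervention:
    name: str
    category: str
    grades: dict  # attribute -> grade (e.g. 0-3) assigned by an evaluator


def intervention_impact(iv: Intervention) -> int:
    """Unitary impact of one intervention: sum of its attribute grades."""
    return sum(iv.grades.values())


def cumulative_impact(interventions):
    """Overall index and per-category subtotals for a river reach."""
    total = sum(intervention_impact(iv) for iv in interventions)
    by_category = {}
    for iv in interventions:
        by_category[iv.category] = by_category.get(iv.category, 0) + intervention_impact(iv)
    return total, by_category


# Hypothetical example: two 19th-century interventions on one reach.
interventions = [
    Intervention("weir", "navigation", {"extent": 2, "persistence": 3, "intensity": 2}),
    Intervention("mill dam", "water power", {"extent": 1, "persistence": 3, "intensity": 2}),
]
total, by_cat = cumulative_impact(interventions)
```

Mapping the per-category subtotals along successive reaches would then yield the kind of cumulative-impact map described above.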
How to cite: Duquesne, A., Plumejeaud-Perreau, C., and Carozza, J.-M.: A qualitative approach to evaluating the impact of human interventions on the middle Charente River (West France), EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10654, https://doi.org/10.5194/egusphere-egu2020-10654, 2020.
EGU2020-835 | Displays | ITS2.2/GM12.5
The first prayer to the Sun
At the end of the Pleistocene, from the 15th to the 12th millennium BP, people of the Magdalenian culture, one of the last Palaeolithic cultures, lived in the Pyrenees in the basin of the Ariège River. The Magdalenians left great works of art on the walls of the caves of the region, and many tools and sculptures have been found inside these caves. According to the French archaeologists who investigated the caves (i.a. Pailhaugue 1998, Clottes 1999), the Magdalenians hunted bison, reindeer, horse, deer and antelope in the tundra during the summer; during the cold winter they moved to the Pyrenees, and it was then that they visited the Pyrenean caves. On the basis of the paintings and sculptures found in Ariège, French archaeologists have drawn conclusions concerning the structure of the social groups and shamanism.
After the electrification of the Grotte de la Vache (one of the caves in Ariège), a very modest image became visible. French archaeologists agreed that it represents the Sun; however, no one has analysed it in more detail.
According to the proposed paper, this modest image of the Sun on the wall of the Grotte de la Vache is the most important of all the images and paintings, because it shows the transition from shamanism to the first religion, Sun worship. Based on the chronology of known volcanic eruptions and the data gathered in the framework of the Greenland Ice Core Project (2009), we are able to date this transition to 12,945 (+/- 15) years BP.
Keywords: shamanism, first religion, Sun worship, Magdalenian culture, Ariège, Niaux, Vache
How to cite: Krukowski, J.: The first prayer to the Sun, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-835, https://doi.org/10.5194/egusphere-egu2020-835, 2020.
EGU2020-830 | Displays | ITS2.2/GM12.5
Fire history and the relationship with late Holocene mining activities in the NW Romanian Carpathians reconstructed from two peat core sequences
Ancuta Petras, Gabriela Florescu, Simon M. Hutchinson, Cécile Brun, Marie-Claude Bal, Vanessa Py Saragaglia, and Marcel Mindrescu
Little is known about how areas of high ecological value and biodiversity hotspots will be impacted in the long term by increasing anthropogenic pressure combined with future climate warming. One such example is the Romanian Carpathians, among the richest biogeographical regions in Europe in terms of biodiversity indicators and home to the largest unmanaged old-growth forests in Europe. This area is currently threatened by forest clearance and other anthropogenic land-use change, poor management practices and increased wildfire risk. Peat bogs are among the most important palaeo-archives for the reconstruction of past environmental changes and disturbance regimes, with the potential to provide the longer-term, local- to regional-scale perspective necessary for the sustainable management and restoration of these areas. Here we reconstruct late Holocene fire history and its relationship with anthropogenic disturbance, particularly mining, in a former mining area located in the Lapus Mts, NW Romanian Carpathians, based on two peat sequences.
To reconstruct past fire activity, we used sedimentary macroscopic charcoal and also employed macro-charcoal morphologies to determine the type of material burnt (wood, grass, forbs). Past local soil/bedrock erosion and regional atmospheric pollution from historical mining were reconstructed on the basis of abiotic sediment properties such as elemental geochemistry, magnetic mineral characteristics, organic matter content and particle size. Our results show clear variations in macro-charcoal concentration, which coincide with changes in the geochemical, magnetic and grain-size indicators. Specifically, increases in macro-charcoal concentration, particularly of the wood charcoal morphotype, were shortly followed in both cores by marked increases in heavy metal concentration and by enhanced soil and bedrock erosion, as inferred from geochemical, magnetic and grain-size proxies. This suggests increased local disturbance during intervals with mining activities and indicates that humans likely used fire to clear the forests and open access to the mining sites. Such actions likely removed topsoil and left bedrock exposed to environmental and climatic factors. Over the last centuries, the recovery of the local environment is evident in the proxies, with low fire activity and low soil/bedrock erosion coinciding with the cessation of local mining activities.
By showing both impact and recovery of the landscape, our study offers insight into the past evolution of this area and can be used to predict future possible responses of the local environment to anthropogenic stressors.
How to cite: Petras, A., Florescu, G., Hutchinson, S. M., Brun, C., Bal, M.-C., Saragaglia, V. P., and Mindrescu, M.: Fire history and the relationship with late Holocene mining activities in the NW Romanian Carpathians reconstructed from two peat core sequences, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-830, https://doi.org/10.5194/egusphere-egu2020-830, 2020.
EGU2020-19383 | Displays | ITS2.2/GM12.5
Geomorphological and climatic controls on the settling of river valleys in NW Transylvania (Romania) in the Holocene
Ioana Persoiu and Aurel Persoiu
The wide river valleys and their lower terraces in NW Transylvania were the main avenue along which people and cultures crossed the Carpathian Mountains (East Central Europe) in the early Holocene and later established communities up to the present. This colonization process was marked by constant shifts between the locations of the main settlements, in response to changes in climate and associated geomorphological processes. In this paper, we have combined paleoclimatic, paleovegetation and geomorphological data from the Someșul Mic catchment to provide a narrative of interactions between human settlers and their natural and built environment between ca. 8000 cal BP and 1850 AD.
The climate of the region had a high degree of continentality (warm summers and cold winters) in the early Holocene; this began to decrease after ca. 7000 cal BP, reaching a minimum in the mid-Holocene. After ca. 4000 cal BP, summer temperatures slightly increased while winter temperatures decreased, leading to renewed continentality. In contrast, the precipitation regime was dominated by low values in the first half of the Holocene, followed by an abrupt increase after 5500 cal BP, when the Mediterranean climate expanded northwards. Pollen records indicate large-scale increases in temperate forests from the early Holocene onwards, with a general decrease in openness after 8500 cal BP. Following the spread of Neolithic societies, arable land expanded after ca. 7500 cal BP, while forested areas subsequently started to decrease. The absolute ages of alluvial sediments along the median reach of the Someșul Mic River suggest that the river has flowed at the floodplain level since the Last Glacial Maximum. In the Late Glacial, the channel transformed from a coarse-gravel braided type into an incised, meandering or anabranching one, except in the area of the river's former alluvial fan, developed at the entrance to the hilly area; there, the Bølling-Allerød Interstadial is marked by a slight decrease in flow regime, with the braided pattern maintained. A generalised change to a narrow, incised meandering channel occurred a few hundred years after the onset of the Holocene, and was most probably preceded by a transitory channel type (wandering or under-adapted braided pattern).
Mesolithic, Neolithic, Bronze Age, Iron Age, Roman and Mediaeval findings are preferentially (82%) positioned on alluvial fans, glacises or positive floodplain forms imposed by tectonic uplift. Only 18% of them are located in areas affected by local subsidence or showing evidence of fluvial activity (active channel, meander belt, palaeochannels).
The human communities made full use of local opportunities in siting their constructions: alluvial fans, glacis, positive morphologies imposed by local tectonics, and channel reaches stable at the millennial or even Holocene scale. Centennial- and millennial-scale climatic (precipitation) variations most probably influenced the spatial dynamics of human settlements and constructions, with advances during warm and dry periods into areas more vulnerable to floods, torrential activity or ground-level variations, and retreats during cold and humid ones. The role of abrupt climate oscillations is not well understood.
How to cite: Persoiu, I. and Persoiu, A.: Geomorphological and climatic controls on the settling of river valleys in NW Transylvania (Romania) in the Holocene, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19383, https://doi.org/10.5194/egusphere-egu2020-19383, 2020.
EGU2020-8975 | Displays | ITS2.2/GM12.5
Ancient to modern metallurgical slags: evolving smelting techniques and their interaction with the environment
Irene Rocchi, Sergio Rocchi, and Matteo Masotta
The discovery of metals, and of how to extract and use them, was a turning point in human history: it changed the economy and socio-cultural structure of ancient civilisations and marked the beginning of severe human impact on the environment. Indeed, many societies developed near extraction sites and founded their economies on the use and trade of metals.
Tuscany (Italy) has a long history of mining and metal extraction. Archaeological studies indicate that the earliest records of these activities date back to the Etruscan period (7th century BC), and exploitation continued intermittently until a few decades ago. This extended period of mining left a wealth of both iron and copper metallurgical slags, which are usually found as abandoned and unsupervised heaps.
These slags, apparently mere waste from the metallurgical process, in fact carry information about the evolution of the process through which they were generated: the charge, flux, and fuel can be inferred from their chemical and mineralogical composition.
Slags from three different smelting districts, ranging from the Etruscan-Roman period to the modern age (ca. 1900 AD), were first studied macroscopically, identifying distinctive features related to the smelting process in different time periods. Thin sections obtained from representative samples were then examined using optical and electron microscopy. Chemical analyses of major and trace elements were performed by X-ray fluorescence spectroscopy and inductively coupled plasma mass spectrometry, respectively.
Leaching experiments on carefully selected samples were also performed, to investigate the release of potentially toxic elements during the interaction of the slags with the surrounding environment.
This kind of investigation makes it possible to reconstruct part of the history of metal utilisation, as well as to predict the impact that these remains will have on the environment.
How to cite: Rocchi, I., Rocchi, S., and Masotta, M.: Ancient to modern metallurgical slags: evolving smelting techniques and their interaction with the environment, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8975, https://doi.org/10.5194/egusphere-egu2020-8975, 2020.
EGU2020-1045 | Displays | ITS2.2/GM12.5
Lake Lerna: investigating Hercules' ancient myth
Danae Thivaiou, Efterpi Koskeridou, Christos Psarras, Konstantina Michalopoulou, Niki Evelpidou, Giannis Saitis, and George Lyras
Greece and the Aegean are among the first areas in Europe to have been occupied by humans, and the record of human interventions in natural environments is thus particularly rich. Some of the interventions of the people inhabiting various localities of the country have been recorded in local mythology. Through the interdisciplinary field of geomythology, it is possible to uncover relationships between the geological history of early civilizations and ancient myths.
In the present work, we focus on the history of Lake Lerna in the eastern Peloponnese, an area better known through the myth of Hercules and the Lernaean Hydra. The area of the lake, now drained and cultivated, was part of a karstic system and constituted a marshland that was a source of disease and needed to be drained.
A new core from the area of modern-day Lerni is studied using palaeontological methods in order to reconstruct the environmental changes of approximately the last 6,000 years. The area is known to have passed from marsh-lacustrine environments to drier environments after human intervention, or, according to mythology, the intervention of Hercules. Peat levels considered to represent humid intervals were radiocarbon-dated to construct an age model for the core. Sediment samples were taken every 10 cm; for each sample, the grain size was analysed, as well as the fossil content, for the environmental reconstruction.
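The age-model step described above can be illustrated with a minimal sketch: linear interpolation between radiocarbon-dated peat levels assigns an age to each 10 cm sampling depth. The depths and ages below are hypothetical placeholders, not the actual Lerna dates:

```python
import numpy as np

# Hypothetical calibrated 14C dates on peat layers (placeholder values,
# not the dates obtained for the Lerna core).
dated_depths = np.array([40.0, 150.0, 310.0, 480.0])    # cm below core top
dated_ages = np.array([900.0, 2400.0, 4300.0, 6000.0])  # cal yr BP

def age_at(depth_cm):
    """Linear age-depth interpolation between dated peat levels."""
    return float(np.interp(depth_cm, dated_depths, dated_ages))

# Assign an age to every 10 cm sampling level, matching the sampling interval.
sample_depths = np.arange(40, 481, 10)
sample_ages = [age_at(d) for d in sample_depths]
```

In practice, age-depth modelling for such cores usually also propagates dating uncertainty (e.g. with Bayesian tools), which this linear sketch omits.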
The presence of numerous freshwater gastropods, accompanied by extremely fine dark sediment, reflects the intervals of lacustrine environment. Sedimentology is stable throughout the core, with few levels of coarse sand or fine gravel; only changes in colour hint at levels richer in organic material.
How to cite: Thivaiou, D., Koskeridou, E., Psarras, C., Michalopoulou, K., Evelpidou, N., Saitis, G., and Lyras, G.: Lake Lerna: investigating Hercules' ancient myth, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1045, https://doi.org/10.5194/egusphere-egu2020-1045, 2020.
EGU2020-19359 | Displays | ITS2.2/GM12.5
Records of climate changes and anthropogenic actions over dune fields in historical times
Mihaela Tudor, Ana Ramos-Pereira, and Joana Gaspar de Freitas
EGU2020-19782 | Displays | ITS2.2/GM12.5
Modeling drift-induced maritime connectivity between Cyprus and its surrounding coastal areas during early Holocene
Andreas Nikolaidis, Evangelos Akylas, Constantine Michailides, Theodora Moutsiou, Georgios Leventis, Alexandros Constantinides, Carole McCartney, Stella Demesticha, Vasiliki Kassianidou, Zomenia Zomeni, Daniella Bar-Yosef Mayer, Yizhaq Makovsky, and Phaedon Kyriakidis
Maritime connectivity between Cyprus and other Eastern Mediterranean coastal regions on the mainland constitutes a critical factor towards understanding the origins of the early visitors to Cyprus during the onset of the Holocene (circa 12,000 years before present) in connection with the spread of the Neolithic in the region (Dawson, 2014).
In this work, ocean circulation modeling and particle tracking are employed for characterizing drift-induced sea-borne connectivity for that period, using data and assumptions that approximate the prevailing paleo-geographical conditions (a coastline reconstructed from global sea-level curves) and rudimentary vessel (raft, dugout) characteristics, as well as present-day weather conditions. The Regional Ocean Modeling System (ROMS; Shchepetkin and McWilliams, 2005), forced by Copernicus Marine portal hydrological data, with wave and wind forcing derived from a combination of global reanalysis data and regional-scale numerical weather predictions (ERA5 and E-WAVE project products), is employed to provide the physical domain and atmospheric conditions. Particle tracking is carried out using the OpenDrift model (Dagestad et al., 2018) to simulate drift-induced (involuntary) sea-borne movement. The sensitivity of the results to the hydrodynamic response (e.g. drag) of rudimentary vessels, such as rafts of postulated shape, size, and weight, believed to have been used for maritime travel during the period of interest, is also investigated. The simulation results are used to estimate the degree of maritime connectivity, due to drift-induced sea-borne movement, between segments of the Cypriot coastline and its neighboring mainlands, and to identify areas of both coastlines where landing or departure might be most favorable.
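The particle-tracking idea described above can be sketched in a few lines: particles seeded near a departure point are advected through a current field, with a random-walk term standing in for unresolved wind and wave forcing. This is only an illustration of the principle, not the actual ROMS/OpenDrift configuration; the current field, seeding positions, and "arrival" threshold are all synthetic placeholders:

```python
import numpy as np

rng = np.random.default_rng(42)

def current(lon, lat):
    """Synthetic surface current (m/s): a steady westward-and-northward drift.
    In the study this would come from ROMS hydrodynamic output instead."""
    u = np.full_like(lon, -0.15)  # eastward component
    v = np.full_like(lat, 0.10)   # northward component
    return u, v

# Seed particles around a hypothetical mainland departure segment.
n = 200
lon = rng.normal(35.0, 0.05, n)  # degrees E (placeholder position)
lat = rng.normal(36.0, 0.05, n)  # degrees N
dt = 3600.0                      # 1 h time step
m_per_deg = 111_000.0            # rough metres per degree of latitude

for _ in range(24 * 10):         # 10 days of drift, forward-Euler advection
    u, v = current(lon, lat)
    # Random walk stands in for unresolved wind/wave forcing and vessel drag.
    lon += (u * dt) / (m_per_deg * np.cos(np.radians(lat))) + rng.normal(0.0, 0.002, n)
    lat += (v * dt) / m_per_deg + rng.normal(0.0, 0.002, n)

# A toy "connectivity" measure: fraction of particles drifting past a
# placeholder target longitude within the simulated window.
arrived = float(np.mean(lon < 34.5))
```

Frameworks such as OpenDrift wrap this same seed-advect-diagnose loop around real ocean-model readers, landmask handling, and configurable drift elements.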
This work aims to provide novel insights into the possible prehistoric maritime pathways between Cyprus and other Eastern Mediterranean coastal regions, and is carried out within the context of project SaRoCy (https://sarocy.cut.ac.cy), a two-year research project implemented under the “Excellence Hubs” Programme (contract number EXCELLENCE/0198/0143) of the RESTART 2016-2020 Programmes for Research, Technological Development and Innovation administered by the Research and Innovation Foundation of Cyprus.
References
Dagestad K.-F., Röhrs J., Breivik Ø., Aadlandsvik B. 2018. “OpenDrift: A generic framework for trajectory modeling”, Geoscientific Model Development 11, 1405-1420. https://doi.org/10.5194/gmd-11-1405-2018.
Dawson, H. 2014. Mediterranean Voyages: The Archaeology of Island Colonisation and Abandonment. Publications of the Institute of Archaeology, University College London. Walnut Creek, California: Left Coast Press Inc.
Shchepetkin, A. F., & McWilliams, J. C. 2005. “The regional oceanic modeling system (ROMS): A split-explicit, free-surface, topography-following-coordinate oceanic model”. Ocean Modelling 9, no. 4, 347-404. https://doi.org/10.1016/j.ocemod.2004.08.002.
How to cite: Nikolaidis, A., Akylas, E., Michailides, C., Moutsiou, T., Leventis, G., Constantinides, A., McCartney, C., Demesticha, S., Kassianidou, V., Zomeni, Z., Bar-Yosef Mayer, D., Makovsky, Y., and Kyriakidis, P.: Modeling drift-induced maritime connectivity between Cyprus and its surrounding coastal areas during early Holocene, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19782, https://doi.org/10.5194/egusphere-egu2020-19782, 2020.
EGU2020-545 | Displays | ITS2.2/GM12.5
Geoarchaeological and paleoenvironmental reconstruction of the Late Quaternary climate-environmental-human nexus in the Kurdistan region of Iraq
Luca Forti, Eleonora Regattieri, Anna Maria Mercuri, Ilaria Mazzini, Andrea Pezzotta, Assunta Florenzano, Cecilia Conati Barbaro, Luca Peyronel, Daniele Morandi Bonacossi, and Andrea Zerboni
During the late Quaternary, Iraqi Kurdistan was the setting of several fundamental human-related events, including the dispersal of Homo into Asia and Europe, the origin of agriculture, the beginning of urbanization, and the formation of the first state entities. We present the initial results of a geoarchaeological investigation in this area, which aims to reconstruct a detailed framework of the relationships between climatic changes, landscape responses, human adaptation, and settlement distribution during the Late Quaternary. Paleoenvironmental and paleoclimatic data were collected from two key areas: the territory of the Navkur and Faideh plains, in northern Kurdistan, and a portion of the Erbil plain, in southern Kurdistan, where the Land of Nineveh and MAIPE archaeological missions are operating. Remote sensing, GIS analyses, and geomorphological survey are the tools used for the geomorphological reconstruction of ancient hydrology (fluvial patterns) and the evolution of distinct landforms. Geochemical and geochronological analyses of speleothems from the Zagros piedmont caves of the same region provide information on Holocene climatic variability in the area, whereas environmental settings and human land use are investigated on the basis of sedimentological, palynological, micropaleontological, and geochemical analyses of a fluvio-lacustrine sequence preliminarily dated to between 40 and 9 ka BP. The lacustrine sequence is composed of clayey and silty-sandy sediments alternating with calcareous and organic-matter-rich layers. Environmental and geomorphological data have been compared with archaeological information (mostly the chronological distribution of archaeological sites) to interpret the exploitation of natural resources, settlement dynamics, and shifts in land use.
How to cite: Forti, L., Regattieri, E., Mercuri, A. M., Mazzini, I., Pezzotta, A., Florenzano, A., Conati Barbaro, C., Peyronel, L., Morandi Bonacossi, D., and Zerboni, A.: Geoarchaeological and paleoenvironmental reconstruction of the Late Quaternary climate-environmental-human nexus in the Kurdistan region of Iraq, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-545, https://doi.org/10.5194/egusphere-egu2020-545, 2020.
EGU2020-10575 | Displays | ITS2.2/GM12.5
Determining parameters and chronology of a sustainable water harvest system in desert oases; case study Qurayyah, northwest Arabian Peninsula
Sabrina Prochazka, Marta Luciani, and Christopher Lüthgens
The arid regions of the world occupy 46 % of the total land surface, providing a habitat for 3 billion people, and more than 630 million people are directly affected by desertification. Extreme events like droughts and flash floods increase the pressure on plants, animals and, above all, humans and their settlements. In the context of climate change with such far-reaching consequences, historical oasis settlements stand out as best-practice examples, because their water-supply systems must have been adapted to the changing climate during the Holocene to guarantee the viability of the oases and their inhabitants. I will focus on the ancient oasis of Qurayyah, located in the northwest of the Arabian Peninsula, a unique example in this context. Recent research has shown that, lacking a groundwater spring, the formation of a permanent settlement in Qurayyah was made possible mainly by surface-water harvesting, with local fracture springs potentially providing only drinking water. First numerical dating results for the water-harvesting system, from optically stimulated luminescence (OSL) dating of quartz, confirm that the system was erected in a period characterized by changing climatic conditions, from the Holocene climate optimum to the recent arid phase. This study aims to determine the parameters and chronology of this sustainable irrigation system, and intends to understand how ancient settlers accomplished the construction of such a highly developed water-supply system. To this end, the irrigation system was reconstructed using field mapping and remote-sensing techniques. The reconstruction shows that it worked as a flood-irrigation system: dams and channels were built to maximize the flooded area and, at the same time, to prevent catastrophic flooding under high-discharge conditions. Contemporaneous historical irrigation systems of comparable size and complexity are known from Mesopotamia and Egypt.
In addition to the system’s reconstruction, a new reverse-engineering approach based on palaeobotany was developed for Qurayyah to reconstruct the climatic conditions during the time of its operation. Compared with today’s precipitation of 32 mm per year in the research area, our results imply that the irrigation system was constructed in a time of significant climate change, because significantly higher amounts of precipitation would have been necessary to supply enough water for the cultivation of olive trees (the reference plant for the reverse-engineering approach).
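The reverse-engineering logic can be illustrated with a back-of-the-envelope water balance: given an assumed crop water demand and runoff-harvesting parameters, one can ask what annual precipitation would have been required. All numbers below are illustrative assumptions, not values from the study:

```python
# Illustrative assumptions (not the study's values):
OLIVE_DEMAND_MM = 450.0   # assumed annual water need of olive cultivation (mm)
RUNOFF_COEFF = 0.3        # assumed fraction of catchment rainfall harvested as runoff
CATCHMENT_TO_FIELD = 5.0  # assumed harvested catchment area per unit field area

def required_precip_mm(demand_mm, runoff_coeff, area_ratio):
    """Precipitation P satisfying P + P * c * r >= demand:
    direct rain on the fields plus runoff harvested from the catchment."""
    return demand_mm / (1.0 + runoff_coeff * area_ratio)

p = required_precip_mm(OLIVE_DEMAND_MM, RUNOFF_COEFF, CATCHMENT_TO_FIELD)
# Even with generous harvesting assumptions, the required precipitation far
# exceeds today's 32 mm/yr, consistent with the inference of a wetter period.
```

Varying the assumed coefficients within plausible ranges changes the exact figure but not the qualitative conclusion that 32 mm/yr is far too little.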
How to cite: Prochazka, S., Luciani, M., and Lüthgens, C.: Determining parameters and chronology of a sustainable water harvest system in desert oases; case study Qurayyah, northwest Arabian Peninsula, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10575, https://doi.org/10.5194/egusphere-egu2020-10575, 2020.
EGU2020-18646 | Displays | ITS2.2/GM12.5
Landscape reconstruction and the relationship between human and environment in Yaowuyao area, Northeastern Tibetan Plateau since 15000 yr BP
Naimeng Zhang, Qinghai Xu, Dongju Zhang, Ulrike Herzschuh, Zhongwei Shen, Wei Peng, Sisi Liu, and Fahu Chen
Understanding the paleoenvironment (climate and landscape) of the areas where ancient humans first appeared on the Tibetan Plateau is an interesting topic. Based on pollen data from the Yaowuyao loess section of the Qinghai Lake Basin, we used landscape reconstruction algorithms to reconstruct changes in vegetation cover over the last 15,000 years. The vegetation in the Yaowuyao area changed from temperate steppe (15-7.5 ka) to forest-steppe (7.5-4 ka). Compared with previous studies of Qinghai Lake sediments, our study better reflects the local environment of the Qinghai Lake basin. Furthermore, comparison with paleoclimate and archeological data from the surrounding areas shows that, as precipitation and tree cover increased, human activities decreased. This may reflect subsistence strategies of ancient humans adapted to the steppe. In addition, our results show that the intensity of ancient human activity is negatively correlated with plant biodiversity, which may be related to human disturbance of the environment. Our paleoecological and environmental study not only depicts the paleoenvironment of early human activities on the Qinghai-Tibet Plateau but also reveals possible early human activity signals.
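The core idea behind landscape reconstruction algorithms such as REVEALS can be sketched as a pollen-productivity correction: raw pollen percentages are divided by taxon-specific relative pollen productivity and renormalised to estimate vegetation cover (full implementations also model dispersal and basin size). The taxa and numbers below are illustrative placeholders, not data from the Yaowuyao section:

```python
# Placeholder pollen percentages and relative pollen productivity estimates
# (PPEs), chosen only to illustrate the correction, not taken from the study.
pollen_pct = {"Pinus": 40.0, "Artemisia": 35.0, "Poaceae": 25.0}
ppe = {"Pinus": 6.0, "Artemisia": 3.5, "Poaceae": 1.0}

def estimated_cover(pollen, productivity):
    """Divide each taxon's pollen share by its productivity, then renormalise
    so the estimated cover fractions sum to 100 %."""
    raw = {t: pollen[t] / productivity[t] for t in pollen}
    total = sum(raw.values())
    return {t: 100.0 * v / total for t, v in raw.items()}

cover = estimated_cover(pollen_pct, ppe)
# High pollen producers (Pinus) shrink relative to their raw percentages,
# while low producers (Poaceae) expand - the key effect the method exploits.
```

This is why corrected reconstructions can show a steppe-dominated landscape even where tree pollen dominates the raw counts.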
How to cite: Zhang, N., Xu, Q., Zhang, D., Herzschuh, U., Shen, Z., Peng, W., Liu, S., and Chen, F.: Landscape reconstruction and the relationship between human and environment in Yaowuyao area, Northeastern Tibetan Plateau since 15000 yr BP, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18646, https://doi.org/10.5194/egusphere-egu2020-18646, 2020.
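The abstract's pollen-based vegetation reconstruction rests on correcting raw pollen percentages for taxon-specific pollen productivity before interpreting them as vegetation cover. A minimal sketch of that correction step is given below; note that real landscape reconstruction algorithms (e.g. REVEALS/LOVE) additionally model pollen dispersal and basin size, and the relative pollen productivity (RPP) values here are illustrative placeholders, not the values used in the study.

```python
def correct_pollen_percentages(pollen_pct, rpp):
    """Divide each taxon's pollen percentage by its relative pollen
    productivity, then renormalize so cover fractions sum to 1."""
    corrected = {taxon: pct / rpp[taxon] for taxon, pct in pollen_pct.items()}
    total = sum(corrected.values())
    return {taxon: value / total for taxon, value in corrected.items()}

# Illustrative pollen spectrum (%) and placeholder RPP values
pollen = {"Pinus": 40.0, "Artemisia": 35.0, "Poaceae": 25.0}
rpp = {"Pinus": 6.0, "Artemisia": 3.5, "Poaceae": 1.0}
cover = correct_pollen_percentages(pollen, rpp)
# High pollen producers (e.g. Pinus) are down-weighted relative to
# their raw pollen percentages.
```

The point of the correction is visible in the output: grasses, which produce little pollen per unit cover, end up with a larger cover fraction than their raw pollen percentage suggests.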
EGU2020-6569 | Displays | ITS2.2/GM12.5
An Archaeometric Characterization of Ancient Pottery from Huagangshan Site, Eastern Taiwan
Ying San Liou
Micro-Raman spectroscopy and petrographic analysis were carried out on ancient potsherds and sediments excavated from the Huagangshan site and on river sediments collected from the northern part of eastern Taiwan. The ceramic fragments analyzed, dating to 1600-2100 B.P., are recognized as belonging to the Early Metal Age of Taiwan. The aims of this study are to identify the mineralogical composition of the ceramics, to explore technical processes such as firing temperature and redox state, and to decipher the nature of the clays and the source of their raw materials.
The micro-Raman analysis of the ancient potsherds shows the presence of 12 minerals. Quartz, anatase, amorphous carbon, hematite, and pyroxenes are the main components of the tempers. Amorphous carbon and hematite are the main constituents of the black- and red-hued pottery, respectively. From the point of view of manufacturing techniques, the large amount of amorphous carbon indicates that the gray-black pottery was fired under reducing conditions, whereas hematite reveals an oxidizing atmosphere for the red-hued pottery. The presence of quartz and anatase implies a firing temperature of about 750-950°C. A total of 66 samples, comprising 23 ceramic fragments (local and imported products), 6 sediment samples from the cultural strata of the archaeological site, and 33 river sediments from around the site, was examined by petrographic thin-section analysis. The petrographic results for the 23 potsherds show a consistent clay proportion (60.5-69.1%). The inclusions principally comprise quartz (polycrystalline and monocrystalline), feldspar, muscovite, and volcanic, sedimentary and metamorphic lithic fragments, with quartz as the main component. In addition, a ternary diagram of (volcanic lithics + quartz)-(sedimentary lithics)-(metamorphic lithics) shows that the raw material source of the pottery recognized by archaeologists as local is in fact not local, but comes from a distant area (the Coastal Range). The imported pottery, on the other hand, indicates a raw material source in the central and southern Central Range (some distance south of the site). These results further illustrate the vigorous exchange and/or trade activities between the populations of eastern Taiwan during the Early Metal Age (1600-2100 B.P.).
How to cite: Liou, Y. S.: An Archaeometric Characterization of Ancient Pottery from Huagangshan Site, Eastern Taiwan, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6569, https://doi.org/10.5194/egusphere-egu2020-6569, 2020.
EGU2020-5013 | Displays | ITS2.2/GM12.5
Recreational impacts on the microclimate of the limestone caves and management in Shoushan National Nature Park of Taiwan
Chun Chen, Lih-Der Ho, and Tzung-Ying Li
This study reports continuous microclimate monitoring carried out in Gorilla Cave, Beifeng Cave, Jingua Cave and Tienyu Cave (Kaohsiung, Taiwan) between June 2018 and August 2019. These limestone caves are located on Mt. Shoushan, which is mainly composed of limestone and mudstone. The study assesses the recreational impacts on the microclimate of the caves by monitoring CO2, temperature, humidity and barometric pressure, and proposes effective management strategies. A monitoring station was set up in the middle of each cave, and an automated time-lapse camera at the entrance of each cave recorded the number of tourists, their entry times and the duration of their stays. As carbon dioxide in limestone caves may negatively affect both speleothems and visitors, our presentation focuses on the variations of CO2 concentration in the caves.
Daily and seasonal fluctuations of CO2 concentration were observed. Monitoring data show that the CO2 concentration in the caves changes significantly between the wet and dry seasons. The monthly mean CO2 concentration correlates well with rainfall and temperature: the higher the temperature and humidity, the higher the CO2 concentration in the cave. In addition, the day-night temperature contrast between the outside and the inside of the caves appears to control how easily the CO2 inside dissipates. In particular, when the night-time temperature outside a cave is lower than the temperature inside, the CO2 concentration inside often drops to the environmental background value (around 420 ppm). The air-density difference caused by this temperature contrast may therefore be an important mechanism driving gas exchange between the cave and the outside.
Based on the monitoring results, we suggest that: (1) the caves be open during the dry season from November to April. Although monitoring data indicate that the caves gradually dry up in October, when cave exploration also becomes popular, the transition from wet to dry is theoretically the stage of speleothem development; given the continuous dripping in the caves at this time, and to avoid disturbing speleothem growth, we recommend keeping the caves closed until most of them are dry in November. (2) The caves be open daily from 8 am to 12 pm and from 1 pm to 5 pm, with a one-hour break at noon. (3) Visitors enter in one batch per hour, eight batches per day, with stays limited to 1 hour. (4) The monitoring results also allow a reasonable estimate of the number of visitors per batch: about 15 people for Gorilla Cave, 20 to 30 for Tienyu Cave, about 20 for Beifeng Cave and 10 to 15 for Jingua Cave.
How to cite: Chen, C., Ho, L.-D., and Li, T.: Recreational impacts on the microclimate of the limestone caves and management in Shoushan National Nature Park of Taiwan, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5013, https://doi.org/10.5194/egusphere-egu2020-5013, 2020.
EGU2020-5175 | Displays | ITS2.2/GM12.5
Structure and function of coupled human—natural systems: from fitting to sustainability
Shuai Wang
Under the intense disturbance of human activities, global resources and the environment face unprecedented stress. Because the Earth has entered the new era of the "Anthropocene", coupling natural and social systems and analyzing the structure and function of the human-land system have become key to ensuring the sustainability of the Earth system. Coupled human-land systems are composed of a natural ecological subsystem and a human social subsystem together with their interactions; their structures are the relationships between internal components, and their functions are the properties that meet particular demands. A coupled human-land system has structural and functional characteristics different from those of social or natural systems alone. While structure determines function, function feeds back on structure. "Fit" is a sustainable configuration of system structure. Here we summarize four main types of fit within coupled human-land systems: (1) fit of totality, in which the allocation of the total amount of key indicators does not exceed thresholds; (2) fit of structure, the configuration of interaction relationships that sustains good system performance; (3) fit of dynamics, adjusting and optimizing the configuration when new changes or disturbances occur; and (4) fit of scale, the rational configuration of structure-function relationships across scales. Research on coupled human-land systems aims, with respect to quantity, order, time and space, to propose ways to regulate the structure of the system so as to achieve sustainable functions, i.e. to keep fit. In the future, priority can be given to three aspects: (1) developing theories and methods for the structure of coupled human-land systems; (2) analyzing changes in the structure of coupled systems and their functional effects; and (3) further identifying and clarifying approaches to keeping fit.
How to cite: Wang, S.: Structure and function of coupled human—natural systems: from fitting to sustainability, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5175, https://doi.org/10.5194/egusphere-egu2020-5175, 2020.
EGU2020-13567 | Displays | ITS2.2/GM12.5
Erosional processes in the natural-anthropic geosystem of Vulcano Island (Italy)
Paolo Madonia and Cipriano Di Maggio
Vulcano, the southernmost island of the Aeolian Archipelago, has been characterized by intense fumarolic activity since its last eruption from the La Fossa cone (1888-1890). The island has a strong touristic vocation and high visitor numbers, and volcano-hydrothermal activity here is at the same time a landmark, one of the main causes of hydrogeological instability, and a severe risk to human health. The space-time dynamics of this complex system are controlled by the mutual interactions among micro-meteorological, volcanic, tectonic, morphogenetic and anthropic processes.
La Fossa cone is affected by intense water-erosion phenomena, also controlled by fumarolic activity, which both inhibits the growth of vegetation and acts as a weathering factor. Man-made structures, in particular the deep modifications of the natural stream network induced by buildings and roads, exert a strong influence on these erosion processes, which are further fostered by episodic wildfires.
Another relevant theme is the acceleration of coastal erosion in the Baia di Levante area, driven by the circulation of chemically aggressive hydrothermal fluids, which transform the pristine volcanic minerals into phases such as gypsum, anhydrite and clay minerals, significantly reducing the mechanical resistance of the rocks to wave erosion. A general retreat of the coastline (several metres in some locations) has been observed over the last twenty years, caused by the combined effect of volcanic activity, anthropic modifications and changes in sea level.
How to cite: Madonia, P. and Di Maggio, C.: Erosional processes in the natural-anthropic geosystem of Vulcano Island (Italy), EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13567, https://doi.org/10.5194/egusphere-egu2020-13567, 2020.
EGU2020-1426 | Displays | ITS2.2/GM12.5
Risk of gully erosion: methods and examples of estimates
Aleksey Sidorchuk and Andrei Entin
The risk of damage to buildings and infrastructure by gully erosion can be estimated on a net of flowlines either by evaluating gully depths with an erosion model, or by calculating simplified measures of erosion rate that correlate with such calculated gully depths and/or with measurements of gully erosion. The most exact approach is based on calculating the transformation of the longitudinal profiles of linear erosion features along all flowlines on a DEM with the GULTEM model. The model includes gully erosion and thermo-erosion as well as gully-bank widening and collapse. It requires detailed meteorological, hydrological, morphological and lithological information and includes model calibration on measurement data. The simplified methods are based on calculating, for each point on the catchment, the critical runoff depth at which linear erosion of the soil begins. Within this approach, either the total sediment yield at each point from all flows above the critical value, or the difference between the maximum runoff depth and its critical value, is calculated. This requires much less hydrological, morphological and lithological information, but takes into account only the initial conditions on the catchment. Calculations of gully-erosion risk were performed on the net of flowlines for the gas fields of the Yamal Peninsula with existing and planned structures and buildings. Comparison of the gully-erosion potential evaluated by the simplified methods with calculations using the detailed dynamic model and with field measurements showed satisfactory agreement. This confirms that express methods can be used for a quick assessment of the suitability of territories for development, followed by detailed GULTEM calculations for particular construction areas to evaluate the risks of landscape and infrastructure disturbance.
Funding: This research was funded by RFBR grant 18-05-60147 "Extreme hydrometeorological phenomena in the Kara Sea and the Arctic coast".
How to cite: Sidorchuk, A. and Entin, A.: Risk of gully erosion: methods and examples of estimates, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1426, https://doi.org/10.5194/egusphere-egu2020-1426, 2020.
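One of the simplified measures the abstract describes — the difference between the maximum runoff depth and the critical runoff depth at which linear erosion begins — can be sketched in a few lines. This is an illustration of that excess-depth measure only, not of the GULTEM model; the grid values and function names are hypothetical.

```python
def erosion_potential(max_runoff_depth, critical_depth):
    """Excess of maximum runoff depth over the critical depth for
    linear erosion, clipped at zero where runoff stays subcritical."""
    return [
        [max(h - hc, 0.0) for h, hc in zip(row_h, row_hc)]
        for row_h, row_hc in zip(max_runoff_depth, critical_depth)
    ]

# Toy 2 x 3 grids of runoff depths (metres); values are illustrative
h_max = [[0.02, 0.10, 0.30],
         [0.05, 0.25, 0.08]]
h_crit = [[0.05, 0.05, 0.05],
          [0.10, 0.10, 0.10]]
potential = erosion_potential(h_max, h_crit)
# Cells with zero potential never exceed the critical runoff depth,
# so no linear erosion is initiated there.
```

In a real application h_crit would itself be computed per cell from soil, vegetation and slope properties, which is why the method still needs morphological and lithological input, just much less of it than the full dynamic model.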
EGU2020-22485 | Displays | ITS2.2/GM12.5
Quantification of Regional Risk from Failure of Earth Dam
Jeong Ah Um, Sungsu Lee, and Hee Jung Ham
To predict loss and damage from hazards such as debris flows resulting from dam failures, three factors must be taken into account: the strength of the hazard, the inventory, and the vulnerability of the inventory to the hazard. In the case of a debris flow, the flow speed, the inundation boundary and depth, and the flow force constitute the hazard. The inventory corresponds to the list of assets and the demographic distribution, while the vulnerability is the probability of damage to each inventory item from the specified hazard. In this study, the hazard is assessed from a 3D numerical simulation of the debris flow caused by dam failure. Since a detailed description and modelling of the inventory is practically impossible, the present study uses a GIS-based regional assessment of vulnerability combined with the inventory, in which the distribution of the inventory represents exposure and the performance of the inventory (e.g. building age) represents sensitivity. As an example, a building vulnerability index is computed by combining five weighted proxy variables: density of the hazard-exposed building area, building importance level, type of building structural material, status of building structural design, and building deterioration level. The selected proxy variables are evaluated with predefined scoring criteria and non-dimensionalized using a standardization method. The resulting vulnerability is normalized for relative assessment within the region of interest. The computed hazard strength is then convolved with the normalized vulnerability, and the result gives the risk of the region. This research was supported by a grant (2018-MOIS31-009) from the Fundamental Technology Development Program for Extreme Disaster Response funded by the Korean Ministry of Interior and Safety (MOIS).
How to cite: Um, J. A., Lee, S., and Ham, H. J.: Quantification of Regional Risk from Failure of Earth Dam, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22485, https://doi.org/10.5194/egusphere-egu2020-22485, 2020.
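The weighted-proxy construction of the building vulnerability index can be illustrated with a short sketch: each proxy variable is standardized across buildings, combined with weights, and the result is normalized over the region for relative comparison. The scores, weights and min-max standardization below are illustrative assumptions, not the study's actual scoring criteria.

```python
def minmax(values):
    """Min-max standardization to [0, 1]; constant columns map to 0."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def vulnerability_index(scores_per_building, weights):
    """scores_per_building: one 5-element proxy-score list per building.
    Returns indices normalized to [0, 1] across the region."""
    # Standardize each proxy variable across all buildings
    columns = list(zip(*scores_per_building))
    std_cols = [minmax(list(col)) for col in columns]
    std_rows = list(zip(*std_cols))
    # Weighted combination, then regional normalization
    raw = [sum(w * s for w, s in zip(weights, row)) for row in std_rows]
    return minmax(raw)

weights = [0.3, 0.2, 0.2, 0.15, 0.15]  # placeholder weights
scores = [  # [exposure density, importance, material, design, deterioration]
    [4, 2, 3, 1, 5],
    [1, 1, 1, 1, 1],
    [5, 4, 4, 3, 4],
]
v = vulnerability_index(scores, weights)
# Most vulnerable building -> 1.0, least vulnerable -> 0.0
```

The final normalization step mirrors the abstract's point that the index is meant for relative assessment within the region of interest, not as an absolute damage probability.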
EGU2020-22490 | Displays | ITS2.2/GM12.5
Numerical Simulation of Debris Flow incurred by Earth Dam Collapse
Sungsu Lee, Joo Yong Lee, and Selugi Lee
More than 70% of the reservoirs in Korea are earth dams more than 50 years old, and until recently large and small reservoirs have repeatedly collapsed and caused damage. However, most domestic and foreign techniques for simulating reservoir collapse, used for damage prediction, are not only based on two-dimensional flow analysis but also ignore the dam-collapse process itself; two-dimensional flow, in particular, cannot reflect the effect of the soil mass at the beginning of the collapse. To overcome these limitations, we used computational fluid dynamics to simulate reservoir collapse with the three-dimensional Navier-Stokes equations, treating the material as a multiphase flow of three phases: soil, suspension, and air. In addition, the Herschel-Bulkley fluid model was modified to take into account the water content and concentration of the soil, and a Coulomb-viscoplastic fluid was introduced to simulate the nonlinear viscosity of the initial soil failure by considering the interactions within the soil. Using these techniques, a 3D simulation was performed assuming total and partial collapse of the mountain reservoir that failed in 2013. The inundation ranges were predicted and compared with the inundation area derived in previous studies. This research was supported by a grant (2018-MOIS31-009) from the Fundamental Technology Development Program for Extreme Disaster Response funded by the Korean Ministry of Interior and Safety (MOIS).
How to cite: Lee, S., Lee, J. Y., and Lee, S.: Numerical Simulation of Debris Flow incurred by Earth Dam Collapse, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22490, https://doi.org/10.5194/egusphere-egu2020-22490, 2020.
ITS2.3/CL1.19 – Climate and Environment Changes and Impact on Civilization development along the Ancient Silk Road
EGU2020-7335 | Displays | ITS2.3/CL1.19 | Highlight
Climate change and Silk Road civilization in the arid central Asia
Fahu Chen, Jianhui Chen, Guanghui Dong, Wei Huang, Juzhi Hou, and Tao Wang
Arid central Asia, comprising northwestern China and central Asia, is one of the most arid regions in the mid-latitudes and also the core area of the Silk Road civilization. The climate of the region has changed dramatically during the Holocene. Prior to 6 ka, moisture increased gradually, and then rapidly, with the most humid period occurring during the late Holocene. Over the last millennium, on the centennial timescale, a dry climate prevailed during the Medieval Warm Period and a wet climate during the Little Ice Age. Instrumental observations show that precipitation, moisture, and stream runoff have all gradually increased on the decadal scale under global warming. Compared with mid-latitude monsoonal Asia and the Mediterranean, the moisture evolution over westerlies-dominated Asia since the Holocene has featured unique characteristics on various timescales. We proposed the theoretical framework of a ‘westerlies-dominated climatic regime’ (WDCR) for these hydroclimatic changes. Further studies of physical mechanisms showed that external factors, e.g. orbitally induced insolation changes, generated the WDCR on the sub-orbital timescale, while a circum-global teleconnection/Silk Road pattern was the most important factor responsible for the WDCR on the centennial and decadal timescales. Climate change has influenced the evolution of civilization along the Silk Road in arid central Asia. The oasis route in this region played a significant role in the development of trans-Eurasian exchange from the late third millennium BCE. This route laid the foundation for the formation of the ancient Silk Road during the second century BCE, which remained among the most important centers of civilization on the planet until the sixteenth century CE. Multidisciplinary studies suggest that an unusually warm and humid climate might have facilitated the rise and development of ancient empires, e.g. the Tubo Empire (618-842 CE) in and around the high Tibetan Plateau.
But climate deterioration, especially severe droughts lasting decades to centuries, triggered the expansion of deserts and the shrinkage of oases along the Silk Road. Such land degradation led to the delayed onset of transcontinental exchange, the decline of ancient civilizations such as the ancient Loulan Kingdom (176 BCE-630 CE), and the abandonment of the Dunhuang area by the Chinese central government between 1539 and 1723 CE, regarded as a landmark event marking the end of the traditional Silk Road. Further analysis suggests that the evolution of ancient civilizations was likely influenced by precipitation variations in the surrounding mountains rather than in the basins of these arid areas of the Silk Road.
How to cite: Chen, F., Chen, J., Dong, G., Huang, W., Hou, J., and Wang, T.: Climate change and Silk Road civilization in the arid central Asia, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7335, https://doi.org/10.5194/egusphere-egu2020-7335, 2020.
EGU2020-7827 | Displays | ITS2.3/CL1.19
Holocene Water Isotope Records Not Reflecting Aridity Changes in Arid Central Asia
Zhonghui Liu, Jiawei Jiang, Zheng Wang, Sergey Krivonogov, Qingfeng Jiang, Juzhi Hou, Cheng Zhao, Aifeng Zhou, Weiguo Liu, and Fahu Chen
Holocene moisture evolution in arid Central Asia, a region dominated by the westerly circulation system, has been shown to contrast sharply with that in Asian monsoonal regions. Yet water isotope records, including stalagmite oxygen isotopes and terrestrial long-chain n-alkane/acid hydrogen isotopes, show many common features in the two regions. Here we present several new isotopic records from arid Central Asia to examine the isotopic differences among various archives/media, together with existing water isotopic records from both regions. Isotopic records that largely reflect a terrestrial signal in arid regions appear to follow the pattern in monsoonal regions, while those likely affected by isotopic enrichment due to lake-water evaporation display various patterns and do not necessarily resemble moisture changes inferred from the same lakes. It thus appears that terrestrial water isotopes in both regions may record the isotopic signature of precipitation but are not necessarily linked to aridity changes. Meanwhile, the isotopic records affected by lake evaporation show, after subtraction of the original precipitation isotopic signal, good correspondence to moisture changes.
How to cite: Liu, Z., Jiang, J., Wang, Z., Krivonogov, S., Jiang, Q., Hou, J., Zhao, C., Zhou, A., Liu, W., and Chen, F.: Holocene Water Isotope Records Not Reflecting Aridity Changes in Arid Central Asia, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7827, https://doi.org/10.5194/egusphere-egu2020-7827, 2020.
EGU2020-7906 | Displays | ITS2.3/CL1.19
Environmental survey and trial excavation at prehistoric settlement site in Neyshabur Plain, Northeastern Iran
Juzhong Zhang, Qilong Cui, Wuhong Luo, Yuzhang Yang, and Omran Garazhian
Razavi Khorasan Province in northeastern Iran, located at the crossroads of Eurasia, was an important point on the middle part of the Silk Road, and the Neyshabur Plain is an important transport hub on this major thoroughfare of Eurasia. A large number of sites are distributed in the river valleys and on the alluvial fans in front of the mountains. An archaeological survey was carried out in the Neyshabur Plain, and more than 10 sites were discovered, which take the form of earthen mounds known as Tape. Judging from the cultural relics on the surface, these sites were occupied through successive cultural sequences, mainly ranging from the Neolithic, Chalcolithic, and Bronze to the Iron Age. This indicates that the climate and environment in the past were better than today. Today the region is characterized by a dry climate and poor land resources. The land is dominated by gobi (gravel) desert, and the vegetation is dominated by camel thorn (Alhagi sparsifolia). Only where the Karez irrigation system exists can wheat (Triticum aestivum), barley (Hordeum vulgare), and saffron (Crocus sativus) be cultivated, while a few orchards are present in some river valley areas.
Tape Borj, the largest prehistoric settlement site in the eastern part of the Neyshabur Plain, Razavi Khorasan Province, NE Iran, covers an area of 13.5 ha. A total area of 110 m² was excavated in the northern and northwestern parts of the site, and geological surveys were also conducted around the site in 2019. In total, 14 ash pits, 4 houses, 6 ovens, and one well were unearthed during the excavation. According to the AMS dates and the material culture, the cultural deposits can be divided into two phases: a Chalcolithic phase between 6500 and 6000 BP and an early Bronze Age phase between 5500 and 5000 BP. Wheat, barley, oats (Avena sativa), and seeds of Celtis sinensis, as well as a large number of animal bones dominated by sheep and goats, were discovered. These results broadly reflect the economic structure and subsistence strategy of the prehistoric inhabitants. The geological survey indicates that two palaeo river courses once passed along the east and west sides of the site during the prehistoric period. In addition, samples were systematically collected for pollen and phytolith analysis in order to understand the palaeoenvironment and the use of plant resources by the ancient people at the site from the Chalcolithic to the Bronze Age. Our work provides valuable material for studying the evolution of the palaeoenvironment and the development of agriculture and animal husbandry in this region.
How to cite: Zhang, J., Cui, Q., Luo, W., Yang, Y., and Garazhian, O.: Environmental survey and trial excavation at prehistoric settlement site in Neyshabur Plain, Northeastern Iran, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7906, https://doi.org/10.5194/egusphere-egu2020-7906, 2020.
EGU2020-21976 | Displays | ITS2.3/CL1.19
Pollen-based quantitative land-cover reconstruction for northern Asia covering the last 40 ka
Xianyong Cao, Fang Tian, Furong Li, Marie-José Gaillard, Natalia Rudaya, Qinghai Xu, and Ulrike Herzschuh
We collected the available relative pollen productivity estimates (PPEs) for 27 major pollen taxa from Eurasia and applied them to estimate plant abundances over the last 40 cal. ka BP (calibrated thousand years before present) using pollen counts from 203 fossil pollen records in northern Asia (north of 40°N). These pollen records were organised into 42 site-groups, and regional mean plant abundances were calculated using the REVEALS (Regional Estimates of Vegetation Abundance from Large Sites) model. Time-series clustering, constrained hierarchical clustering, and detrended canonical correspondence analysis were performed to investigate the regional pattern, timing, and strength of vegetation changes, respectively. The reconstructed regional plant-functional-type (PFT) components for each site-group are generally consistent with modern vegetation, in that vegetation changes within the regions are characterized by minor changes in the abundance of PFTs rather than by the appearance of new PFTs, particularly during the Holocene. We argue that pollen-based REVEALS estimates of plant abundance should reflect the vegetation more reliably, as pollen percentages may overestimate turnover, particularly when a high pollen producer invades areas dominated by low pollen producers. Comparisons with vegetation-independent climate records show that climate change is the primary factor driving land-cover changes at broad spatial and temporal scales. Vegetation changes in certain regions or periods, however, cannot be explained by climate change directly, for example in inland Siberia, where a sharp increase in evergreen conifer tree abundance occurred at ca. 7–8 cal. ka BP despite an unchanging climate, potentially reflecting a response to complex climate–permafrost–fire–vegetation interactions and thus a possible long-term lagged response to climate.
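The core of the PPE correction can be illustrated simply: dividing pollen proportions by each taxon's relative productivity before renormalizing down-weights high pollen producers. This is a deliberately simplified sketch; the full REVEALS model also accounts for taxon-specific dispersal and basin size, which are omitted here, and the taxa and PPE values below are hypothetical.

```python
# Simplified PPE correction: raw pollen counts are divided by relative pollen
# productivities, then renormalized to relative plant abundances.
def ppe_corrected_abundance(counts, ppes):
    """Estimate relative plant abundance from pollen counts and PPEs."""
    raw = {taxon: counts[taxon] / ppes[taxon] for taxon in counts}
    total = sum(raw.values())
    return {taxon: value / total for taxon, value in raw.items()}

# A high pollen producer (e.g. Pinus) is down-weighted relative to a low
# producer (e.g. Larix), so its corrected share falls below its pollen share.
counts = {"Pinus": 60, "Betula": 30, "Larix": 10}   # pollen grains counted
ppes = {"Pinus": 6.0, "Betula": 2.0, "Larix": 0.5}  # relative productivity
abundance = ppe_corrected_abundance(counts, ppes)
```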
How to cite: Cao, X., Tian, F., Li, F., Gaillard, M.-J., Rudaya, N., Xu, Q., and Herzschuh, U.: Pollen-based quantitative land-cover reconstruction for northern Asia covering the last 40 ka, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21976, https://doi.org/10.5194/egusphere-egu2020-21976, 2020.
EGU2020-3875 | Displays | ITS2.3/CL1.19
The millennial-scale climatic variability in central Asia during last glacial
Jia Jia, Zhiyuan Wang, Leibin Wang, and Jianhui Chen
In the North Atlantic and the surrounding region, more than 20 rapid millennial-scale climatic fluctuations occurred during the last glacial-interglacial cycle (Dansgaard et al., 1993). These oscillations are known as Dansgaard-Oeschger (D-O) cycles and Heinrich (H) events. Modelling studies suggest that millennial-scale climatic signals can spread over a wide area via atmospheric and oceanic circulations. However, such records are lacking in central Asia, a region that is climatically arid and sensitive to climate change.
Here we present a record of millennial-scale fluctuations from loess deposits in Tajikistan, Central Asia. The frequency-dependent magnetic susceptibility (Xfd, a moisture proxy) record in the Darai Kalon (DK) section (38º23′4″N, 69º50′1″E, 1561 m) can be readily matched with the NGRIP oxygen isotope curve, especially during the interval from 60-30 ka, in which typical D-O cycles and H-events are well developed. Most of the long-lasting D-O cycles in Greenland, e.g. D-O 8, 12, and 14, are also evident in the Tajikistan loess. Similarly, the short-duration D-O cycles in Greenland, e.g. D-O 6, 7, 9, and 10, have damped counterparts in the Tajikistan loess. However, some significant differences in detail can be observed between the two records. The most distinct difference concerns the last D-O cycle, comprising the well-documented Oldest Dryas (OD or H1), Bølling-Allerød (BA), and Younger Dryas (YD or H0) events, which are not clearly present in the Xfd curve.
The magnetic results indicate that the climate of central Asia was humid during interstadials and dry during stadials. Moreover, humidity variations were much more pronounced in central Asia than on the Chinese Loess Plateau, which is climatically dominated by the Asian monsoon. This shows that humidity in central Asia was sensitive to millennial-scale climate oscillations during the last glacial. The comparison further indicates that the propagation of millennial-scale climatic signals differed between the two regions. We propose that the former pathway is the westerlies, which can directly and effectively force millennial-scale climatic variability in central Asia, while the latter involves the thermohaline circulation and the Asian monsoon; this more complex propagation weakened the millennial-scale climatic variability in northern China.
How to cite: Jia, J., Wang, Z., Wang, L., and Chen, J.: The millennial-scale climatic variability in central Asia during last glacial, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3875, https://doi.org/10.5194/egusphere-egu2020-3875, 2020.
EGU2020-3185 | Displays | ITS2.3/CL1.19
An n-alkane-based Holocene climate reconstruction in the Altai Mountains, northern Xinjiang, China
Min Ran
The climate of the Altai Mountains is highly sensitive to large-scale forcing factors because of the range's particular geographic location. Based on n-alkane data from 150 samples, with chronological support from 15 accelerator mass spectrometry (AMS) dates on a 600-cm core at GHZ Peat, Holocene climatic changes in the Altai Mountains were reconstructed. The reconstruction reveals a warming and drying early Holocene (~10,750-~8500 cal. yr BP), a cooling and persistently dry middle Holocene (~8500-~4500 cal. yr BP), and a cooling and wetting late Holocene (~4500-~700 cal. yr BP). Holocene temperature changes were primarily controlled by summer solar radiation, with a certain time lag in the early Holocene, and were also modulated by solar activity; the early Holocene time lag probably resulted from ice and permafrost melting. Holocene moisture in the southern Altai Mountains was likely modulated by the North Atlantic Oscillation (NAO), by Atlantic multi-centennial oscillations (i.e., AMO-like variability), by temperature, or by any combination of the three.
How to cite: Ran, M.: An n-alkane-based Holocene climate reconstruction in the Altai Mountains, northern Xinjiang, China, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3185, https://doi.org/10.5194/egusphere-egu2020-3185, 2020.
EGU2020-12886 * | Displays | ITS2.3/CL1.19 | Highlight
A review on the spread of prehistoric agriculture from southern China to mainland Southeast Asia
Yu Gao, Guanghui Dong, Xiaoyan Yang, and Fahu Chen
The origin and spread of agriculture was one of the milestones in human history. When and how prehistoric agriculture spread to mainland Southeast Asia is of great interest, as this process contributed to the formation of the modern Austroasiatic populations of the region. Previous studies mainly focused on the timing and route of rice agriculture's introduction into Southeast Asia, while millet agriculture received little attention. Here we analyze 312 14C dates obtained from charred seeds of rice (Oryza sativa), foxtail millet (Setaria italica), and broomcorn millet (Panicum miliaceum) from 128 archaeological sites in China and mainland Southeast Asia. The results show that millet farming was introduced to mainland Southeast Asia in the late third millennium BC and rice farming in the late second millennium BC. The agriculture of mainland Southeast Asia might originate from three areas: Southwest China, Guangxi-western Guangdong, and coastal Fujian. The spread route of ancient agriculture through Southwest China is close to the “Southwest Silk Road” recorded in the literature, suggesting that a channel of cultural exchange possibly existed on the eastern margin of the Tibetan Plateau as early as the late Neolithic, laying the foundation for the later Southwest Silk Road.
How to cite: Gao, Y., Dong, G., Yang, X., and Chen, F.: A review on the spread of prehistoric agriculture from southern China to mainland Southeast Asia, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12886, https://doi.org/10.5194/egusphere-egu2020-12886, 2020.
EGU2020-7977 | Displays | ITS2.3/CL1.19
Precursor of the Highland silk road on the Tibetan PlateauXiaoyan Yang
The ancient Silk Road had two main branches: a northern line across Xinjiang, and a southern line, proposed as the Highland Silk Road, connecting the Tibetan Empire, the Tang Dynasty, and states in South Asia. Due to the harsh natural environment of the Tibetan Plateau (TP), little archaeological work has been carried out there, and few archaeological sites have been excavated on the TP. How the Highland Silk Road developed is unclear.
In 2018, researchers from the Chinese Academy of Sciences (CAS) launched the second scientific expedition to the Qinghai-Tibet Plateau (STEP), which will last 5 to 10 years, following an expedition in the 1970s. Sponsored by the STEP, we carried out two systematic surveys along the Yarlung Tsangpo River, especially its middle and lower reaches, to understand prehistoric human activities on the central Tibetan Plateau, where the mean altitude is above 4000 meters. We investigated the terraces along the river and its tributaries, as well as terraces circling the lake banks. In total, 99 archaeological sites were surveyed, including 58 new discoveries. Anthropic deposits were found at 31 sites, and a profile was cleared at each of these sites to collect dating materials. Charcoal and charred seeds were floated from the anthropic deposits, and 60 samples were dated by AMS 14C.
In combination with previously published dates, we establish a brief history of human activities on the central TP: Neolithic people had occupied the Yarlung Tsangpo Valley by the third millennium BC and moved along the river and its tributaries. The route of dispersal is similar to the historic Highland Silk Road, indicating that this road had already developed in prehistory.
How to cite: Yang, X.: Precursor of the Highland silk road on the Tibetan Plateau, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7977, https://doi.org/10.5194/egusphere-egu2020-7977, 2020.
EGU2020-6328 | Displays | ITS2.3/CL1.19
Variation of bacterial communities in Muztagh ice core from 1869 to 2000Yongqin Liu, Tandong Yao, and Baiqing Xu
Many studies have used physical and chemical indicators in ice cores to reflect climate changes. However, only a few biological indicators have been used to trace past climate changes, and these have mainly focused on biomass rather than diversity. How biodiversity responded to climate change during the past hundred years is still unknown. Glaciers in the Mt. Muztagh Ata region are influenced by the year-round westerly circulation. We first disclose annual variations of bacterial community composition over the past 130 years in an ice core from Muztagh Glacier, on the western Tibetan Plateau. Temporal variation in bacterial abundance was strongly controlled by DOC, TN, δ18O, Ca2+, SO42−, NH4+ and NO3−. Proteobacteria, Actinobacteria and Firmicutes were the three most abundant bacterial phyla, accounting for 49.3%, 21.3% and 11.0% of the total community, respectively. The abundances of Firmicutes and Bacteroidetes increased markedly over time throughout the entire ice core. UPGMA cluster analysis of bacterial community composition separated all ice core samples into two main clusters along the temporal gradient. The first cluster consisted of samples from 1951 to 2000, and the second cluster mainly contained samples from the period 1869-1950. The stage 1 and stage 2 bacterial community dissimilarities increased linearly with time on the basis of the Bray-Curtis distance, indicating a similar time-decay relationship for the stage 1 and stage 2 bacterial communities. Of all the environmental variables examined, only DOC and NH4+ exhibited very strong negative correlations with bacterial Chao1 richness. δ18O was another important variable shaping the ice core bacterial community composition and contributed 1.6% of the total variation. Moreover, DistLM analysis indicated that the environmental variables explained more variation in the stage 1 community (20.1%) than in the stage 2 community (19.9%).
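The Bray-Curtis distance used above to quantify how communities diverge over time is a simple abundance-based dissimilarity; a minimal sketch with hypothetical taxon counts (the sample vectors are invented for illustration):

```python
# Bray-Curtis dissimilarity between two community abundance vectors:
# BC = sum|u_i - v_i| / sum(u_i + v_i); 0 = identical, 1 = no shared taxa.
def bray_curtis(u, v):
    num = sum(abs(a - b) for a, b in zip(u, v))
    den = sum(a + b for a, b in zip(u, v))
    return num / den

# Two hypothetical ice-core samples with counts for three taxa
sample_1950 = [50, 30, 20]
sample_2000 = [20, 30, 50]
print(bray_curtis(sample_1950, sample_2000))
```

A time-decay relationship, as described in the abstract, would then appear as a roughly linear increase of this dissimilarity with the number of years separating two samples.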
How to cite: Liu, Y., Yao, T., and Xu, B.: Variation of bacterial communities in Muztagh ice core from 1869 to 2000, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6328, https://doi.org/10.5194/egusphere-egu2020-6328, 2020.
EGU2020-1818 | Displays | ITS2.3/CL1.19
How climate change affected the evolution of ancient civilizations in eastern Ancient Silk Road?Guanghui Dong, Ruo Li, Shanjia Zhang, and Fengwen Liu
The study of the coupling between climate change and the evolution of civilizations along the Ancient Silk Road can provide valuable insights for understanding the history, pattern and mechanism of human-environment interactions from a long-term perspective. Here we present two case studies from the Hexi Corridor and the Qaidam Basin in northwest China, areas that lie on the eastern Ancient Silk Road and became a center for trans-continental exchange in the second millennium BC, and where hydrological change is very drastic. The results reveal three significant desertification events in these two areas during the late Holocene, which were likely related to precipitation variations in the surrounding mountains rather than in the basins, and which triggered the shrinkage of ancient oases and, in turn, the decline of ancient civilizations. We also attempt to explain the linkage between climate change and the evolution of ancient civilizations in the two areas.
How to cite: Dong, G., Li, R., Zhang, S., and Liu, F.: How climate change affected the evolution of ancient civilizations in eastern Ancient Silk Road?, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1818, https://doi.org/10.5194/egusphere-egu2020-1818, 2020.
EGU2020-6762 | Displays | ITS2.3/CL1.19
Quantitative Estimates of Holocene Glacier Mass Fluctuations on the Western Tibetan PlateauJuzhi Hou
Knowledge of alpine glacier mass fluctuations is a fundamental prerequisite for understanding glacier dynamics, projecting future glacier change, and assessing the availability of freshwater resources. The glaciers of the Tibetan Plateau (TP) are sources of water for most of the major Asian rivers, and their fate remains unclear because accurate estimates of glacier mass fluctuations over long time scales are lacking. Here, we used a δ18O record from a proglacial open lake as a proxy to estimate Holocene glacier mass fluctuations in the Western Kunlun Mountains (WKM) quantitatively and continuously. Relative to past decades, the maximum WKM glacier mass loss (−28.62±25.76 Gt) occurred at 9.5-8.5 ka BP, and the maximum glacier mass gain (24.53±25.02 Gt) occurred at 1.3-0.5 ka BP; the difference in WKM glacier mass between the two periods accounts for ~20% of the total glacier mass. Long-term changes in glacier mass suggest that the TP glaciers likely face severe threats at the current rate of global warming.
How to cite: Hou, J.: Quantitative Estimates of Holocene Glacier Mass Fluctuations on the Western Tibetan Plateau, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6762, https://doi.org/10.5194/egusphere-egu2020-6762, 2020.
EGU2020-3862 | Displays | ITS2.3/CL1.19
Mid- to- late Holocene hydroclimatic changes on the Chinese Loess Plateau: evidence from n-alkanes from the sediments of Tianchi LakeAifeng Zhou
We have reconstructed the history of mid- to late Holocene paleohydrological changes on the Chinese Loess Plateau using n-alkane data from a sediment core in Tianchi Lake. We used Paq (the proportion of aquatic macrophytes in the total plant community) to reflect changes in lake water level, with a higher abundance of submerged macrophytes indicating a lower water level, and vice versa. The Paq-based hydrological reconstruction agrees with various other lines of evidence, including the ACL (average chain length), CPI (carbon preference index), C/N ratio and n-alkane molecular distribution of the sediments in Tianchi Lake. The results reveal that the lake water level was relatively high during 5.7 to 3.2 ka BP and decreased gradually thereafter. Our paleohydrological reconstruction is consistent with existing paleoclimate reconstructions from the Loess Plateau, which suggest a humid mid-Holocene, but is asynchronous with paleoclimatic records from central China, which indicate an arid mid-Holocene. Overall, our results confirm that the intensity of the rainfall delivered by the EASM (East Asian summer monsoon) is an important factor affecting paleohydrological changes in the region, and can be considered further evidence for the development of a spatially asynchronous “northern China drought and southern China flood” precipitation pattern during the Holocene.
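The Paq and ACL indices cited above are ratios of n-alkane homologue abundances; a minimal sketch follows, using the Paq formulation of Ficken et al. (2000). The ACL chain-length range and the concentration values are assumptions for illustration, as the abstract does not specify them:

```python
# Illustrative computation of two n-alkane indices from homologue
# concentrations keyed by carbon chain length (hypothetical values).

def paq(c):
    """Paq = (C23 + C25) / (C23 + C25 + C29 + C31), after Ficken et al. (2000)."""
    return (c[23] + c[25]) / (c[23] + c[25] + c[29] + c[31])

def acl(c, chains=(25, 27, 29, 31, 33)):
    """Average chain length, abundance-weighted over the given odd homologues."""
    total = sum(c[n] for n in chains)
    return sum(n * c[n] for n in chains) / total

# Hypothetical n-alkane concentrations (e.g., ug/g dry sediment)
conc = {23: 4.0, 25: 6.0, 27: 5.0, 29: 8.0, 31: 7.0, 33: 2.0}
print(paq(conc))  # higher values -> more aquatic macrophyte input
print(acl(conc))
```

Mid-chain homologues (C23, C25) are typically attributed to submerged/floating macrophytes and long-chain ones (C29, C31) to terrestrial plants, which is why a rising Paq is read as a falling lake level in this record.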
How to cite: Zhou, A.: Mid- to- late Holocene hydroclimatic changes on the Chinese Loess Plateau: evidence from n-alkanes from the sediments of Tianchi Lake, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3862, https://doi.org/10.5194/egusphere-egu2020-3862, 2020.
EGU2020-17666 | Displays | ITS2.3/CL1.19
Prehistoric sheep/goats husbandry in Xinjiang, China—Evidence from bone stable carbon and nitrogen isotopesWeimiao Dong
Sheep and goats were introduced into northwest China as important livestock some four thousand years ago. The frequency of sheep/goat bones in prehistoric archaeological sites in Xinjiang attests to their importance in people’s lives. This study reconstructs the diets of prehistoric sheep/goats across Xinjiang to assess whether husbandry practices differed. Bone samples from 11 sites were isotopically analyzed and combined with 4 sets of published data, yielding in total 220 pairs of sheep/goat bone stable carbon and nitrogen isotope values from 15 sites across Xinjiang, with a time span of ca. 4000 cal BP to ca. 2000 cal BP. Nine sites, each with no fewer than 10 samples, were studied further. Generally, sheep/goats from the 4 oasis sedentary farming societies have both higher δ13C values and higher δ15N values, albeit with large fluctuations. It is highly likely that C4 plants such as foxtail millet or common millet were present in their environment. As for their remarkably high δ15N values, drought stress in the arid environment may have been one reason; soil fertilized by long-term, relatively intensive human activity may also have contributed. Meanwhile, sheep/goats from the 5 pastoralist or transhumant societies have homogeneous and more negative δ13C values, most of which are lower than -18‰, meaning that there were almost no C4 plants in their diet. In contrast, their δ15N values are lower overall than those of the farming societies but more scattered; seasonally different pastures with diverse δ15N backgrounds could be the reason.
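The δ13C and δ15N values discussed above follow standard delta notation, the per-mil deviation of a sample's isotope ratio from an international standard; a minimal sketch (the standard ratios are commonly cited values for VPDB and atmospheric N2, included as assumptions for illustration):

```python
# Delta notation: delta = (R_sample / R_standard - 1) * 1000, in per mil.
R_VPDB_13C = 0.0112372   # commonly cited 13C/12C ratio of the VPDB standard
R_AIR_15N  = 0.0036765   # commonly cited 15N/14N ratio of atmospheric N2

def delta(r_sample, r_standard):
    """Per-mil deviation of a sample isotope ratio from a standard."""
    return (r_sample / r_standard - 1.0) * 1000.0

# A sample whose 13C/12C ratio is 1.8% below VPDB gives delta13C = -18 permil,
# the threshold the abstract uses to infer a diet with almost no C4 plants.
print(delta(R_VPDB_13C * 0.982, R_VPDB_13C))
```

Because C4 plants such as millets discriminate less against 13C than C3 plants, animals foddered on millet byproducts carry measurably higher (less negative) δ13C, which is the basis of the farming-vs-pastoralism contrast drawn above.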
How to cite: Dong, W.: Prehistoric sheep/goats husbandry in Xinjiang, China—Evidence from bone stable carbon and nitrogen isotopes, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17666, https://doi.org/10.5194/egusphere-egu2020-17666, 2020.
EGU2020-12828 | Displays | ITS2.3/CL1.19
Centennial to millennial-scale monsoon changes since the last deglaciation linked to solar activities and North Atlantic coolingXingxing Liu, Youbin Sun, Jef Vandenberghe, Peng Cheng, Xu Zhang, Evan Gowan, Gerrit Lohmann, and Zhisheng An
Rapid monsoon changes since the last deglaciation remain poorly constrained due to the scarcity of geological archives. Here we present a high-resolution scanning X-ray fluorescence (XRF) analysis of a 13.5-m terrace succession on the western Chinese Loess Plateau (CLP) to infer rapid monsoon changes since the last deglaciation. Our results indicate that Rb/Sr and Zr/Rb are sensitive indicators of chemical weathering and wind sorting, respectively, which are in turn linked to the strength of the East Asian summer monsoon (EASM) and the East Asian winter monsoon (EAWM). During the last deglaciation, the two cold intervals of Heinrich event 1 and the Younger Dryas were characterized by an intensified winter monsoon and a weakened summer monsoon. The EAWM weakened gradually from the beginning of the Holocene, while the EASM remained steady until 9.9 ka and then grew stronger. Both the EASM and the EAWM were relatively weak during the middle Holocene, indicating a mid-Holocene climatic optimum. Rb/Sr and Zr/Rb exhibit an anti-phase relationship between summer and winter monsoon changes on the centennial timescale during 16-1 ka BP. Comparison of these monsoon changes with solar activity and North Atlantic cooling events reveals that both factors could drive abrupt centennial-scale changes in the early Holocene. During the late Holocene, North Atlantic cooling became the major forcing of centennial monsoon events.
How to cite: Liu, X., Sun, Y., Vandenberghe, J., Cheng, P., Zhang, X., Gowan, E., Lohmann, G., and An, Z.: Centennial to millennial-scale monsoon changes since the last deglaciation linked to solar activities and North Atlantic cooling, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12828, https://doi.org/10.5194/egusphere-egu2020-12828, 2020.
EGU2020-22270 | Displays | ITS2.3/CL1.19
Postcranial Phenotypic Adaptations to New Habitats Following Domestication --- An Investigation on Ovis Metacarpals by 3D Geometric MorphometricsYiru Wang, Robin Bendrey, Jeff Schoenebeck, and Tom Marchant
Domestication is a complex evolutionary process in which wild organisms are moved into anthropogenic environments and undergo a series of phenotypic changes in response to artificial selection and new habitats. In recent years, phenotypic variations have been detected between wild and domestic species, as well as between breeds of domestic species, through dental and skeletal elements. However, the mechanisms of phenotypic adaptation of the postcranial skeleton to new environments following domestication are still poorly understood. In this study, morphological variations in the metacarpals of a primitive sheep (Ovis aries) breed, the Soay sheep, are investigated. Controlled modern samples of known sex, age, and diet, from animals living feral on St Kilda (Scotland) as well as animals relocated to and raised on East Anglian farms, were analysed using 3D geometric morphometrics. Specific morphotypes were found to be associated with the animals’ age, sex, and anthropogenic stressors in the new ecological niches under human control. Importantly, beyond these traditionally observed contributors to morphological change, the animals’ locomotor adaptations to different physical terrains (flat, enclosed East Anglian farms in contrast to mountainous St Kilda) were observed, indicating that the animals’ movement into new landscapes under human management might be detected from these specific morphotypes. This study bears testament to the process of initial caprine domestication and provides insights into bovids’ biological mechanisms during the co-evolutionary process between humans, animals, and physical environments. The specific interlinks between phenotypic features and the animals’ adaptations following domestication and translocation could serve as a basis for further studies on the process and effects of the beginnings and spread of farm animals across prehistoric Eurasia.
How to cite: Wang, Y., Bendrey, R., Schoenebeck, J., and Marchant, T.: Postcranial Phenotypic Adaptations to New Habitats Following Domestication --- An Investigation on Ovis Metacarpals by 3D Geometric Morphometrics, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22270, https://doi.org/10.5194/egusphere-egu2020-22270, 2020.
EGU2020-13015 | Displays | ITS2.3/CL1.19
Changes in the hydrodynamic intensity of Bosten Lake and its impact on early human settlement in the northeastern Tarim Basin, eastern Arid Central AsiaHaichao Xie
The climate of eastern arid central Asia (ACA) is extremely dry, and early human settlement and civilization in the region were dependent upon a potentially unstable water supply. Thus, knowledge of the history of hydrological fluctuations is essential for understanding the relationship between humans and the environment in the region. Here we present a record of variation in lake hydrodynamic intensity based on the grain size of suspended lacustrine silt isolated from the sediments of Bosten Lake, which feeds a river flowing to the northeastern Tarim Basin. The results show that lake hydrodynamic intensity was very weak, and/or that the lake dried out completely, during the early Holocene (12.0–8.2 ka). It then increased, with two distinct centennial- to millennial-scale intervals of weak intensity occurring during 4.7–3.5 ka and 1.2–0.5 ka. Notably, increases in lake hydrodynamic intensity occurred some 2.2 kyr prior to an increase in local precipitation and effective moisture. We speculate that this was a consequence of relatively high early summer temperatures during 8.2–6.0 ka that resulted in an increased water supply from melting snow and ice in mountainous areas of the catchment. Thus, we conclude that changes in the hydrodynamic intensity of Bosten Lake during the Holocene were affected by changes in both temperature and precipitation. The variations in the hydrodynamic intensity of Bosten Lake since the middle Holocene also influenced water availability for the human population that occupied the downstream area of the northeastern Tarim Basin. A persistent increase in hydrodynamic intensity during 2123–1450 B.C. may have been responsible for human occupation of the region that contains the noted archaeological sites of Xiaohe and Gumugou Cemetery. In addition, a drastic decrease in hydrodynamic intensity at around 400 A.D. likely caused the emigration of the inhabitants of Loulan.
How to cite: Xie, H.: Changes in the hydrodynamic intensity of Bosten Lake and its impact on early human settlement in the northeastern Tarim Basin, eastern Arid Central Asia, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13015, https://doi.org/10.5194/egusphere-egu2020-13015, 2020.
EGU2020-4601 | Displays | ITS2.3/CL1.19
Holocene moisture variations in western arid central Asia inferred from loess records from NE Iran
Qiang Wang, Haitao Wei, Farhad Khormali, Leibin Wang, Haichao Xie, Xin Wang, Wei Huang, Jianhui Chen, and Fahu Chen
Holocene variations in precipitation in central and eastern arid central Asia (ACA) have been widely investigated, but the pattern in western ACA remains unclear. We present records of the stable carbon isotope composition of bulk organic matter (δ13Corg), magnetic parameters, and sediment color, from five loess-paleosol sequences in NE Iran, in western ACA, with the aim of reconstructing Holocene precipitation. The Yellibadragh (YE) section (the thickest among the five sequences) was selected for OSL dating of the coarse-grained quartz (63-90 μm) fraction, and its δ13Corg record was used to quantitatively reconstruct mean annual precipitation (MAP). The record indicates a dry early Holocene (~11.8-7.4 ka), with nearly constant MAP (~93 mm), followed by a wetting trend from the mid-Holocene (~7.4 ka) onwards, with the wettest period in the late Holocene (~4.0-0.0 ka, ~390 mm). The stratigraphic observations and environmental proxies support the reconstruction. The other loess profiles show stratigraphic features and trends of environmental proxies which are similar to those of the YE profile. A dry early Holocene and wetting trend since the mid-Holocene, with the wettest climate in the late Holocene in NE Iran, are both consistent with records from sand dunes and lake sediments from adjacent areas, and with loess records from central and eastern ACA. Comparison with loess records from monsoonal Asia supports the interpretation of a “westerlies-dominated climatic regime” (WDCR) which was proposed mainly on the basis of lake sediment records from the region. Changes in solar insolation may have been responsible for the persistent wetting trend during the Holocene in western ACA.
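The quantitative MAP reconstruction above relies on a transfer function relating δ13Corg to precipitation. A minimal sketch of such a linear calibration is given below; all numbers and the function name are hypothetical illustrations, not the authors' actual calibration:

```python
import numpy as np

def fit_transfer_function(d13c, map_mm):
    """Least-squares linear transfer function MAP = a + b * d13Corg.
    Returns (intercept a, slope b)."""
    b, a = np.polyfit(d13c, map_mm, 1)  # polyfit returns [slope, intercept]
    return a, b

# Hypothetical calibration pairs (modern surface samples; illustrative only)
d13c = np.array([-24.0, -23.0, -22.0, -21.0])
map_mm = np.array([400.0, 300.0, 200.0, 100.0])
a, b = fit_transfer_function(d13c, map_mm)

# Apply to a down-core d13Corg value to estimate palaeo-MAP
map_estimate = a + b * (-23.5)
```

In practice the calibration would be fitted to a regional modern dataset and carry prediction uncertainty; the sketch only shows the arithmetic of the transfer-function step.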
How to cite: Wang, Q., Wei, H., Khormali, F., Wang, L., Xie, H., Wang, X., Huang, W., Chen, J., and Chen, F.: Holocene moisture variations in western arid central Asia inferred from loess records from NE Iran, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4601, https://doi.org/10.5194/egusphere-egu2020-4601, 2020.
EGU2020-3196 | Displays | ITS2.3/CL1.19
Holocene moisture variations in the Tianshan Mountains and their geographic coherency in the mid-latitude Eurasia: A synthesis of proxy records
Yunpeng Yang
Understanding Holocene moisture variations in Arid Central Asia (ACA) has large-scale climatic significance because the vast ACA is influenced by several different climate systems. However, the temporal and spatial patterns of Holocene aridity (or moisture) variations in the ACA, and the mechanisms modulating them, have remained controversial for the past two decades. We first depict the spatial and temporal patterns of Holocene aridity variations in the Tianshan Mountains based on thirteen published aridity sequences and two recently obtained sequences. The regionally-averaged standardized aridity-index (RA-SAI) curve for the Eastern Tianshan Mountains exhibits a wetting trend, whereas the RA-SAI curve for the Western Tianshan Mountains exhibits a drying trend. We then examine these two RA-SAI sequences (one for the Eastern Tianshan and one for the Western Tianshan) within a much larger geographic context to explore the mechanisms modulating these Holocene patterns. The summer-precipitation-dominated northern mid-latitude Eurasia (MLEA-N) has experienced a wetting trend, while the winter-precipitation-dominated southern mid-latitude Eurasia (MLEA-S) has experienced a drying trend. We propose that the wetting trend in MLEA-N resulted from increasingly positively-phased AMO activity, which progressively enhanced cyclonic pressure anomalies over the Atlantic region, directly or indirectly bringing more summer precipitation to MLEA-N from western Europe to the Eastern Tianshan. The drying trend in MLEA-S, in turn, is proposed to have resulted from increasingly negatively-phased NAO activity: negatively-phased NAO weakened the Western Disturbances, reducing winter precipitation in MLEA-S from the Eastern Mediterranean to the Western Tianshan.
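The regional averaging behind the RA-SAI curves described above can be illustrated as z-score stacking of individual proxy sequences on a common time axis. This is a minimal sketch of the general idea; the record values are synthetic and the function name is an assumption, not the author's code:

```python
import numpy as np

def regional_sai(records):
    """Standardize (z-score) each aridity sequence, then average across
    records to form a regionally-averaged standardized aridity index."""
    standardized = [(r - r.mean()) / r.std() for r in records]
    return np.mean(standardized, axis=0)

# Three synthetic aridity sequences sharing a 100-step time axis
t = np.linspace(0.0, 10.0, 100)
records = [np.sin(t), 0.5 * t, np.cos(t) + 0.1 * t]
ra_sai = regional_sai(records)
```

Because each input is standardized before stacking, records with different units and variances contribute equally, and the stacked index itself has zero mean by construction.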
How to cite: Yang, Y.: Holocene moisture variations in the Tianshan Mountains and their geographic coherency in the mid-latitude Eurasia: A synthesis of proxy records, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3196, https://doi.org/10.5194/egusphere-egu2020-3196, 2020.
EGU2020-12724 | Displays | ITS2.3/CL1.19
Crop dispersal along the prehistoric highland silk road on the Tibetan Plateau
Jishuai Yang and Xiaoyan Yang
Previous studies demonstrated that farmers spread onto the Tibetan Plateau (TP) and permanently settled there around 3600 yr cal BP, entering along the northeastern edges of the TP and bringing the western crop barley, together with sheep. Other studies, however, have argued for earlier permanent settlement or for different routes into the central TP. Meanwhile, the Yarlung Tsangpo River region in the southern TP is considered one of the important routes for cultural exchange and human migration; it connected the Tibetan Empire, the Tang Dynasty and states in South Asia during the historical period and has been proposed as a highland silk road. However, the role of this route in prehistoric cultural exchange and human colonization of the TP remains unclear, owing to the scarcity of archaeological work in the region.
Systematic surveys along the Yarlung Tsangpo River were carried out over the last two years. Charcoal and charred seeds were recovered by flotation from the cultural layers of 31 sites, and 60 new radiocarbon dates were obtained. The charred seeds include wheat, barley and pea from the west, and broomcorn millet and foxtail millet from the east. In combination with previously published dates, we reconstruct routes of crop dispersal and a brief history of human activity on the central TP. Neolithic people occupied the Yarlung Tsangpo Valley from different directions of the TP in the third millennium BC, carrying western or eastern crops, and moved along the river and its tributaries. This dispersal route is similar to the historic highland silk road, indicating that the road had played an important role since prehistory.
How to cite: Yang, J. and Yang, X.: Crop dispersal along the prehistoric highland silk road on the Tibetan Plateau, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12724, https://doi.org/10.5194/egusphere-egu2020-12724, 2020.
EGU2020-5067 | Displays | ITS2.3/CL1.19
Mid-late Holocene hydroclimate variation in the source region of the Yangtze River revealed by lake sediment records
Hou Xiaohuan, Liu Lina, Sun Zhe, Cao Xianyong, and Hou Juzhi
The headwater region of the Yangtze River is a major constituent of the Chinese Water Tower and is critical in providing fresh water for hundreds of millions of people living downstream. Hydrological variation is mainly influenced by environmental changes; a good understanding of climate change in the source region of the Yangtze River (SRYR) is therefore of great significance. Here, we present a lacustrine sediment core from Saiyong Co in the SRYR, northeastern Tibetan Plateau, China, and reconstruct hydrological variation and its main controls over the past 6 ka based on grain size, scanning XRF and loss on ignition (LOI). Notably, total organic matter (LOI at 550 ℃) varies inversely with PC1 of the XRF data, which represents allochthonous input, indicating that the majority of the organic matter was produced within the lake. Palaeohydrological proxies such as reduced PC1 and increased median grain size coincide with a weakened Indian summer monsoon, suggesting a generally drying trend in the SRYR during the mid-late Holocene. However, short-lived wet episodes occurred at 3.8-3.2 ka BP and 1.5-1.0 ka BP: abrupt increases in PC1 and in very coarse silt indicate that the lake catchment became more humid, with higher surface runoff, consistent with weaker lake productivity. The inferred hydrological changes in the SRYR since 6 ka BP not only have significant environmental implications but also agree with other sequences from the Tibetan Plateau and adjacent regions. This study provides a long-term record of palaeoenvironmental evolution that is particularly valuable for understanding recent, and predicting future, hydrological change in the SRYR.
How to cite: Xiaohuan, H., Lina, L., Zhe, S., Xianyong, C., and Juzhi, H.: Mid-late Holocene hydroclimate variation in the source region of the Yangtze River revealed by lake sediment records, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5067, https://doi.org/10.5194/egusphere-egu2020-5067, 2020.
EGU2020-4965 | Displays | ITS2.3/CL1.19
Late Holocene Varve Chronology and High-Resolution Records of Precipitation in the Central Tibetan Plateau
Kejia Ji, Erlei Zhu, Guoqiang Chu, and Juzhi Hou
Precise age control is a fundamental prerequisite for reconstructing past climate and environmental changes. Lakes on the Tibetan Plateau (TP) are among the important archives for studying such changes. However, radiocarbon ages of lake sediment cores are subject to old-carbon reservoir effects, which cause severe problems in constructing age models, especially on the TP. Here we present a varve chronology spanning the past 2000 years at Jiang Co on the central TP. The clastic-biogenic varves comprise a coarse-grained layer and a fine-grained layer, identified with a petrographic microscope and an Electron Probe Micro-Analyzer. The varve chronology is supported by measurements of 210Pb and 137Cs and is further used to determine radiocarbon reservoir ages over the past ~2000 years. The thickness of the coarse-grained layer as a percentage of each varve was taken as a proxy for precipitation, because the coarse grains are mainly transported by runoff; this proxy correlates strongly with local meteorological observations. Over the past 2000 years, the precipitation record shows centennial-scale fluctuations that are consistent with regional records. The varve chronology at Jiang Co provides a valuable opportunity to examine variations in reservoir ages on the TP and a robust chronology for reconstructing palaeoclimate.
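The precipitation proxy described above, the coarse-grained layer's share of total varve thickness, amounts to a simple per-varve ratio. A minimal sketch with invented layer thicknesses (the function name and values are illustrative, not the authors' data):

```python
import numpy as np

def coarse_layer_percent(coarse_mm, fine_mm):
    """Coarse-layer thickness as a percentage of total varve thickness
    (coarse + fine), used as a runoff/precipitation proxy."""
    coarse = np.asarray(coarse_mm, dtype=float)
    fine = np.asarray(fine_mm, dtype=float)
    return 100.0 * coarse / (coarse + fine)

# Three hypothetical varves (layer thicknesses in mm)
pct = coarse_layer_percent([0.6, 0.3, 0.5], [0.4, 0.7, 0.5])
```

Normalising by total varve thickness rather than using the raw coarse-layer thickness removes the effect of year-to-year variations in overall sedimentation rate.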
How to cite: Ji, K., Zhu, E., Chu, G., and Hou, J.: Late Holocene Varve Chronology and High-Resolution Records of Precipitation in the Central Tibetan Plateau, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4965, https://doi.org/10.5194/egusphere-egu2020-4965, 2020.
EGU2020-3874 | Displays | ITS2.3/CL1.19
The forced response of Asian Summer Monsoon precipitation during the past 1500 years
Zhiyuan Wang, Jianglin Wang, Jia Jia, and Jian Liu
The Asian summer monsoon (ASM) is one of the critical elements of the global climate system and strongly affects food production and security for most people in Asia. However, the characteristics and forcing drivers of the ASM system at decadal to centennial time scales remain unclear. To address these issues, we report four 1500-yr-long climate model simulations with the Community Earth System Model (CESM): a full-forcing run (ALLR), a control run (CTRL), a natural-forcing run (NAT), and an anthropogenic-forcing run (ANTH). After evaluating the performance of CESM in simulating ASM precipitation, a 10-100 yr bandpass filter is applied to isolate the decadal-centennial signals in ASM precipitation. The main conclusions are: (1) the variation of ASM intensity shows significant decadal to centennial periodicities in ALLR, at ~15, ~25, ~40, and ~70 years; (2) the major spatio-temporal ASM precipitation patterns in ALLR comprise an externally forced mode and an internal-variability mode; (3) the leading forced mode of ASM precipitation is mainly driven by natural forcing over the past 1500 years and is characterized by a meridional 'tripole' pattern. In NAT (solar irradiance and volcanic eruptions), substantial warming (cooling) over the western tropical Pacific enhances (reduces) the SST gradient across the tropical Pacific, modifying the distribution of ASM rainfall. Our findings contribute to a better understanding of the past ASM and have implications for future projections of the ASM under global warming.
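The 10-100 yr band-pass step on an annually resolved series can be sketched with a simple Fourier-domain filter. This is an illustration of the filtering idea only; the abstract does not specify which filter implementation the authors used, and the series here is synthetic:

```python
import numpy as np

def fft_bandpass(series, short_period=10.0, long_period=100.0):
    """Keep only Fourier components with periods between short_period and
    long_period (in years); assumes one sample per year."""
    n = len(series)
    freqs = np.fft.rfftfreq(n, d=1.0)  # cycles per year
    spec = np.fft.rfft(series)
    keep = (freqs >= 1.0 / long_period) & (freqs <= 1.0 / short_period)
    spec[~keep] = 0.0                  # zero out-of-band components
    return np.fft.irfft(spec, n)

# Synthetic 1500-yr series: a 50-yr cycle (inside the band, kept)
# plus a 5-yr cycle (outside the band, removed)
t = np.arange(1500)
series = np.sin(2 * np.pi * t / 50) + 0.5 * np.sin(2 * np.pi * t / 5)
filtered = fft_bandpass(series)
```

A hard spectral cut like this has sharper roll-off than a Butterworth filter but can ring near abrupt transitions; either choice isolates the decadal-centennial band described in the abstract.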
How to cite: Wang, Z., Wang, J., Jia, J., and Liu, J.: The forced response of Asian Summer Monsoon precipitation during the past 1500 years, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3874, https://doi.org/10.5194/egusphere-egu2020-3874, 2020.
ITS2.4/HS12.1 – From the Source to the Sea – River-Sea Systems under Global Change
EGU2020-13344 | Displays | ITS2.4/HS12.1
Another drought or more pesky humans? Anthropogenic impacts leave drought-like sedimentological signatures in offshore sediments
Akos Kalman, Timor Katz, Alysse Mathalon, Paul Hill, and Beverly Goodman-Tchernov
EGU2020-9015 | Displays | ITS2.4/HS12.1
Influence of organic matter quality on organic matter degradability in river sediments
Florian Zander, Julia Gebert, Rob N. J. Comans, Alexander Groengroeft, Timo J. Heimovaara, and Annette Eschenbach
The project BIOMUD, part of the scientific network MUDNET (www.tudelft.nl/mudnet), investigates the decomposition of sediment organic matter (SOM) in the Port of Hamburg. Microbial turnover of SOM under reducing conditions produces methane, carbon dioxide and other gases, changing the sediment's rheological parameters. BIOMUD aims to explain the effect of organic matter lability on the rheological properties that affect the navigable depth of the harbour.
Samples of freshly deposited material were taken in 2018 and 2019 at nine locations in a transect of 30 km through the Port of Hamburg. Analyses included abiotic parameters (among others grain size distribution, standard pore water properties, standard solid properties, stable isotopes, mineral composition) and biotic parameters (among others anaerobic and aerobic organic matter degradation, DNA, protein and lipid content, microbial population). At four locations, physical density fractions and chemical organic matter fractions were analysed.
The quality of organic matter was described by normalising carbon released from microbial degradation under both aerobic and anaerobic conditions to the share of total organic carbon (mg C/g TOC). Organic matter pools with different degradation rates were used to quantify the lability of organic matter. The share of faster degradable (more labile) pools correlated strongly with the size of the hydrophilic DOC fraction, confirming results of Straathof et al. (2014) who investigated dissolved organic carbon pools in compost. The hydrophilic DOC fraction was closely correlated to the polysaccharide concentration, explaining the input of easily degradable organic matter. Moreover, the amount of organic carbon present in the sediment’s light density fraction < 1.4 g/cm3 strongly correlated with the hydrophilic DOC fraction and, less strongly, with organic matter lability. High organic matter quality, i.e. the labile, easily degradable fraction, was further related to the chlorophyll concentration in the water column but also the ammonium concentration in the sediment’s pore water.
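The organic matter pools with different degradation rates mentioned above are commonly represented by a multi-pool first-order decay model. A hedged sketch of a two-pool version follows; the pool sizes, rate constants and function name are invented for illustration and are not necessarily the authors' parameterisation:

```python
import numpy as np

def two_pool_release(t_days, c_max, f_labile, k_labile, k_stable):
    """Cumulative carbon release (mg C per g TOC) from a two-pool
    first-order decay model: a labile pool (fraction f_labile of c_max,
    rate k_labile per day) plus a stable pool (rate k_stable per day)."""
    labile = f_labile * c_max * (1.0 - np.exp(-k_labile * t_days))
    stable = (1.0 - f_labile) * c_max * (1.0 - np.exp(-k_stable * t_days))
    return labile + stable

# Hypothetical incubation: 100 mg C/g TOC degradable in total, 30 % labile
release_21d = two_pool_release(21.0, 100.0, 0.3, 0.15, 0.002)
```

Fitting such a model to measured cumulative release curves yields the share of the faster-degrading (more labile) pool, the quantity the abstract correlates with the hydrophilic DOC fraction.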
It was hypothesised that the observed toposequence of decreasing organic matter quality from upstream to downstream could be explained by a chronosequence of increasing degradation and therefore ageing of organic matter as the sediment passes through the harbour area. Further, it was hypothesized that the harbour received organic matter of higher degradability, originating from phytoplankton biomass, from the upstream part of the Elbe river, whereas the input from the tidal downstream area provided organic matter of lower quality (degradability).
This study was funded by Hamburg Port Authority.
How to cite: Zander, F., Gebert, J., Comans, R. N. J., Groengroeft, A., Heimovaara, T. J., and Eschenbach, A.: Influence of organic matter quality on organic matter degradability in river sediments, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9015, https://doi.org/10.5194/egusphere-egu2020-9015, 2020.
The project BIOMUD, part of the scientific network MUDNET (www.tudelft.nl/mudnet), investigates the decomposition of sediment organic matter (SOM) in the Port of Hamburg. The microbial turnover of sediment organic matter under reducing conditions leads to the formation of methane, carbon dioxide and others gases causing a change in the sediment rheological parameters. BIOMUD is aiming to explain the effect of organic matter lability on the rheological properties impacting the navigable depth of the harbour.
Samples of freshly deposited material were taken in 2018 and 2019 at nine locations in a transect of 30 km through the Port of Hamburg. Analyses included abiotic parameters (among others grain size distribution, standard pore water properties, standard solid properties, stable isotopes, mineral composition) and biotic parameters (among others anaerobic and aerobic organic matter degradation, DNA, protein and lipid content, microbial population). At four locations, physical density fractions and chemical organic matter fractions were analysed.
The quality of organic matter was described by normalising carbon released from microbial degradation under both aerobic and anaerobic conditions to the share of total organic carbon (mg C/g TOC). Organic matter pools with different degradation rates were used to quantify the lability of organic matter. The share of faster degradable (more labile) pools correlated strongly with the size of the hydrophilic DOC fraction, confirming results of Straathof et al. (2014) who investigated dissolved organic carbon pools in compost. The hydrophilic DOC fraction was closely correlated to the polysaccharide concentration, explaining the input of easily degradable organic matter. Moreover, the amount of organic carbon present in the sediment’s light density fraction < 1.4 g/cm3 strongly correlated with the hydrophilic DOC fraction and, less strongly, with organic matter lability. High organic matter quality, i.e. the labile, easily degradable fraction, was further related to the chlorophyll concentration in the water column but also the ammonium concentration in the sediment’s pore water.
It was hypothesised that the observed toposequence of decreasing organic matter quality from upstream to downstream could be explained by a chronosequence of increasing degradation, and therefore ageing, of organic matter as the sediment passes through the harbour area. It was further hypothesised that the harbour received organic matter of higher degradability, originating from phytoplankton biomass, from the upstream part of the Elbe river, whereas the input from the tidal downstream area provided organic matter of lower quality (degradability).
This study was funded by Hamburg Port Authority.
How to cite: Zander, F., Gebert, J., Comans, R. N. J., Groengroeft, A., Heimovaara, T. J., and Eschenbach, A.: Influence of organic matter quality on organic matter degradability in river sediments, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9015, https://doi.org/10.5194/egusphere-egu2020-9015, 2020.
EGU2020-18362 | Displays | ITS2.4/HS12.1
Response of Elbe estuary ecosystem to changed riverine nitrogen loads
Johannes Pein, Ute Daewel, Emil Stanev, and Corinna Schrum
The present-day dynamics of the Elbe estuary ecosystem are largely determined by high loads of river-borne inorganic and organic nitrogen. Like most European tidal rivers, the Elbe estuary is highly eutrophied. The eutrophication leads to high primary production in the shallow limnic reach, followed by heterotrophic decay, sedimentation and summertime oxygen depletion in the deepened channel and harbour area. For several decades, the estuary has been subject to adverse trends regarding the forcing of the heterotrophic turnover: while the ambient temperature increases, the nitrogen loads are decreasing (Radach and Pätsch, 2008). The projected long-term and climatic changes imply that these trends will continue (Radach and Pätsch, 2008; Huang et al., 2010). In this study we use an unstructured 3D coupled bio-physical model of the Elbe estuary to study the effect of long-term changes of riverine nitrogen loads on the estuarine ecosystem. As a first step we change the riverine nitrogen forcing by: i) reducing the dissolved inorganic and organic nitrogen loads equally, by 50 % each; ii) reducing the inorganic and organic loads by 80 % and 40 %, respectively; iii) reducing both inorganic and organic loads towards pre-industrial levels (Serna et al., 2010). Our results indicate a decrease of primary production and heterotrophic turnover under all scenarios. The decrease of primary production is mainly due to reduced diatom growth. Consequently, summertime nitrification and oxygen depletion also decrease. This effect is more pronounced for an equal reduction of inorganic and organic loads than for a strong reduction of inorganic nitrogen loads only. Unlike diatoms, cyanobacteria are less affected by the applied changes, and their biomass even increases relative to the reference case under scenario ii).
In the second part of the study we will increase the temperature forcing to determine to what degree the projected increase in ambient temperatures will affect the projected reduction in nitrogen turnover.
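The three load-reduction scenarios described above amount to simple scalings of a reference riverine forcing series. The sketch below illustrates this under invented numbers; the load values and pre-industrial levels are hypothetical, not the model's actual forcing.

```python
# Hypothetical daily dissolved inorganic (DIN) and organic (DON)
# nitrogen loads for a reference run (arbitrary units).
din_ref = [120.0, 150.0, 90.0]
don_ref = [40.0, 55.0, 35.0]
din_pre, don_pre = 20.0, 10.0    # assumed pre-industrial levels

scenarios = {
    # i) equal 50 % reduction of DIN and DON
    "i_equal_50": ([d * 0.5 for d in din_ref], [d * 0.5 for d in don_ref]),
    # ii) 80 % DIN reduction, 40 % DON reduction
    "ii_80_40":   ([d * 0.2 for d in din_ref], [d * 0.6 for d in don_ref]),
    # iii) both loads set to pre-industrial levels
    "iii_preind": ([din_pre] * len(din_ref), [don_pre] * len(don_ref)),
}

for name, (din, don) in scenarios.items():
    print(name, round(sum(din), 1), round(sum(don), 1))
```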
How to cite: Pein, J., Daewel, U., Stanev, E., and Schrum, C.: Response of Elbe estuary ecosystem to changed riverine nitrogen loads, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18362, https://doi.org/10.5194/egusphere-egu2020-18362, 2020.
EGU2020-8950 | Displays | ITS2.4/HS12.1
A catchment to coast framework for the evolution of a coastal mangrove wetland
Jose Rodriguez, Eliana Jorquera, Patricia Saco, and Angelo Breda
Coastal wetlands sit at the interface between land and sea, receiving water, sediment and nutrients from upstream catchments while also being subject to tides, waves and changing sea levels. Analysis of their future evolution requires consideration of the entire catchment-to-coast system, including the effects of climate variability and change, and of land-use change. We have developed a modelling framework that incorporates both catchment and coastal processes into the evolution of coastal wetlands by coupling an ecogeomorphological wetland evolution model with a hydrosedimentological catchment model, so that both tidal and catchment runoff inputs are included. We drive the model with storm events and sea-level variations and analyse scenarios of future climate and land use for a catchment in Vanua Levu, Fiji, that includes a mangrove wetland at the catchment outlet. We inform our model with field, remote sensing and historical data on land use, tides, sediment and nutrient transport, and cyclone activity.
How to cite: Rodriguez, J., Jorquera, E., Saco, P., and Breda, A.: A catchment to coast framework for the evolution of a coastal mangrove wetland, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8950, https://doi.org/10.5194/egusphere-egu2020-8950, 2020.
EGU2020-6846 | Displays | ITS2.4/HS12.1
A Virtual Geostationary Ocean Colour Sensor to characterise the river-sea interaction over the North Adriatic Sea
Marco Bracaglia, Rosalia Santoleri, Gianluca Volpe, Simone Colella, Federica Braga, Debora Bellafiore, and Vittorio Ernesto Brando
Inherent optical properties (IOPs) and the concentrations of sea water components are key quantities in supporting the monitoring of water quality and the study of ecosystem functioning. In coastal waters, these quantities have a large spatial and temporal variability, due to river discharges and meteo-marine conditions, such as wind, waves and currents, and their interaction with the shallow-water bathymetry. This short-term variability can be adequately captured only using geostationary Ocean Colour (OC) satellites, which are absent over the European seas.
In this study, to compensate for the lack of a geostationary OC sensor over the North Adriatic Sea (NAS), the Virtual Geostationary Ocean Colour Sensor (VGOCS) dataset has been used. VGOCS contains data from several polar-orbiting OC satellites, making multiple images of the NAS available each day and approaching the temporal resolution of a geostationary sensor.
Generally, data from different satellite sensors are characterized by different uncertainty sources; consequently, when comparing two satellite images, it is not easy to ascertain how much of the observed differences is due to real processes. In the VGOCS dataset, the inter-sensor differences are reduced, as the satellite data were adjusted with a multi-linear regression algorithm based on in situ reflectance acquired in the Gulf of Venice. Consequently, using the adjusted spectra as input in the retrieval of the IOPs and the concentrations allows a reliable analysis of the short-term bio-optical variability of the basin.
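An inter-sensor adjustment of this kind can be sketched as an ordinary least-squares fit of per-band weights plus an intercept, mapping sensor reflectance towards in situ reflectance. This is a minimal illustration under invented, synthetic data; the band count, coefficients and variable names are assumptions, not the actual VGOCS algorithm.

```python
import numpy as np

# Synthetic remote-sensing reflectance (Rrs) at four bands for 50 matchups.
rng = np.random.default_rng(0)
n_obs, n_bands = 50, 4
rrs_sat = rng.uniform(0.001, 0.02, size=(n_obs, n_bands))   # satellite Rrs
true_coef = np.array([0.9, 0.05, -0.02, 0.01])              # invented "truth"
rrs_insitu = rrs_sat @ true_coef + 0.001                    # synthetic in situ target

# Fit intercept + band weights by least squares.
X = np.hstack([np.ones((n_obs, 1)), rrs_sat])
coef, *_ = np.linalg.lstsq(X, rrs_insitu, rcond=None)

# Apply the regression to adjust the satellite spectra.
rrs_adjusted = X @ coef
print("max residual:", float(np.max(np.abs(rrs_adjusted - rrs_insitu))))
```

Because the synthetic target is exactly linear in the bands, the fit recovers it to machine precision; with real matchups the residual quantifies the remaining inter-sensor mismatch.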
In this work, we demonstrate the suitability of VGOCS for characterising the river-sea interaction and for understanding the influence of the river forcing on the short-term variability of IOPs and concentrations in coastal areas. This variability will be analysed for different case studies characterised by different river-discharge regimes, using meteorological, hydrological and oceanographic fields as ancillary variables. This new approach and the availability of this new set of data represent an opportunity for interdisciplinary studies, in support of, and interaction with, modelling implementations in river-sea areas.
How to cite: Bracaglia, M., Santoleri, R., Volpe, G., Colella, S., Braga, F., Bellafiore, D., and Brando, V. E.: A Virtual Geostationary Ocean Colour Sensor to characterise the river-sea interaction over the North Adriatic Sea., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6846, https://doi.org/10.5194/egusphere-egu2020-6846, 2020.
EGU2020-8587 | Displays | ITS2.4/HS12.1
Pulsed terrestrial organic carbon persists in an estuarine environment after major storm events
Eero Asmala, Christopher Osburn, Ryan Paerl, and Hans Paerl
The transport of dissolved organic carbon from land to ocean is a large and dynamic component of the global carbon cycle. Export of dissolved organic carbon from watersheds is largely controlled by hydrology, and is exacerbated by increasing major rainfall and storm events, causing pulses of terrestrial dissolved organic carbon (DOC) to be shunted through rivers downstream to estuaries. Despite this increasing trend, the fate of the pulsed terrestrial DOC in estuaries remains uncertain. Here we present DOC data from 1999 to 2017 in the Neuse River Estuary (NC, USA) and analyze the effect of six tropical cyclones (TCs) during that period on the quantity and fate of DOC in the estuary. We find that TCs promote a considerable increase in DOC concentration near the river mouth at the entrance to the estuary; on average, an increase of 200 µmol l⁻¹ due to storms was observed. TC-induced increases in DOC are apparent throughout the estuary, and the duration of these elevated DOC concentrations ranges from one month at the river mouth to over six months in the lower estuary. Our results suggest that despite the fast mineralization rates, the terrestrial DOC is processed only to a minor extent relative to the pulsed amount entering the estuary. We conclude that the vast quantity of organic carbon delivered to estuaries by TCs transforms estuaries from active biogeochemical processing “reactors” of organic carbon into something more like passive shunts, owing to the sheer amount of pulsed material rapidly flushed through the estuary.
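The storm analysis described above boils down to two quantities per station: the peak DOC anomaly relative to a pre-storm baseline, and the number of months the concentration stays elevated. A minimal sketch, with all concentrations and the elevation threshold invented for illustration:

```python
# Hypothetical pre-storm baseline DOC at one station (µmol/l) and an
# assumed definition of "elevated" as >10 % above baseline.
baseline = 500.0
threshold = baseline * 1.1

# Hypothetical monthly mean DOC after a tropical cyclone (µmol/l),
# relaxing back towards baseline.
doc = [700.0, 640.0, 600.0, 560.0, 540.0, 520.0, 505.0]

peak_anomaly = max(doc) - baseline                  # storm-induced increase
months_elevated = sum(1 for c in doc if c > threshold)
print(peak_anomaly, months_elevated)
```

Repeating this per station along the estuary would reproduce the kind of gradient reported above, from short-lived anomalies at the river mouth to multi-month persistence downstream.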
How to cite: Asmala, E., Osburn, C., Paerl, R., and Paerl, H.: Pulsed terrestrial organic carbon persists in an estuarine environment after major storm events, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8587, https://doi.org/10.5194/egusphere-egu2020-8587, 2020.
EGU2020-20824 | Displays | ITS2.4/HS12.1
Dublin Bay Water Quality Modelling from Catchment to Coast
Aisling Corkery, Guanghai Gao, John O'Sullivan, Liam Reynolds, Laura Sala-Comorera, Niamh Martin, Jayne Stephens, Tristan Nolan, Wim Meijer, Bartholemew Masterson, Conor Muldoon, and Gregory O'Hare
This paper presents the development and preliminary results of a deterministic modelling system for bathing water quality assessment in Dublin Bay, Ireland. The system integrates functional capacity for simulating the transport and fate of diffuse agricultural pollutants (utilising the NAM rainfall-runoff model in conjunction with MIKE 11), discharges from the Dublin urban drainage network (through MIKE Urban and InfoWorks software), and the ultimate fate of pollutants in Dublin Bay (coastal domain modelling utilises the 3-dimensional MIKE 3 code). The work presented forms part of the EU INTERREG funded Acclimatize project (www.acclimatize.eu), which is investigating the longer-term water quality pressures in Dublin Bay that may arise in the context of a changing climate (particularly from predicted changes in precipitation totals and patterns). Model calibration and validation have been underpinned by extensive data collection from within the catchments discharging to Dublin Bay and from the bay area itself. Catchment data include observations of hydrometeorological variables for establishing relationships with measured flows and water quality at catchment and sub-catchment scales. Coastal data relate to water quality, coastal hydrodynamics (current speed and direction collected from ADCP deployments at multiple monitoring points in the bay), temperature and salinity. A nested modelling approach, in which the modelled domain is embedded in a larger Irish Sea model, has been adopted. Tidal constituents along the seaward boundaries of this nested model have been calibrated to correlate well with tidal measurements from a set of established tide gauges within the modelled domain. Bottom friction was calibrated to produce good correlations of measured and simulated current speed and direction. Preliminary results indicate that the transport of faecal indicator bacteria within the study area is adequately represented for spring and neap tide conditions.
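Comparing modelled boundary tides with gauge records, as described above, rests on harmonic analysis: each constituent's amplitude and phase is estimated by least squares. A minimal sketch for a single constituent (M2, period 12.42 h) with a synthetic water-level record; the signal values are invented, and this is a generic illustration rather than the project's calibration procedure.

```python
import numpy as np

period_h = 12.42                       # M2 period in hours
omega = 2 * np.pi / period_h
t = np.arange(0, 72, 0.5)              # 3 days of half-hourly samples (hours)

# Synthetic "gauge" signal: amplitude 1.2 m, phase lag 0.7 rad, small offset.
eta = 1.2 * np.cos(omega * t - 0.7) + 0.05

# eta ≈ a*cos(ωt) + b*sin(ωt) + mean; solve for a, b, mean by least squares.
X = np.column_stack([np.cos(omega * t), np.sin(omega * t), np.ones_like(t)])
(a, b, mean), *_ = np.linalg.lstsq(X, eta, rcond=None)

amplitude = np.hypot(a, b)             # constituent amplitude
phase = np.arctan2(b, a)               # constituent phase (rad)
print(round(float(amplitude), 3), round(float(phase), 3))
```

Repeating the fit for several constituents at model boundary points and at the gauges gives the amplitude/phase pairs whose agreement is tuned during calibration.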
How to cite: Corkery, A., Gao, G., O'Sullivan, J., Reynolds, L., Sala-Comorera, L., Martin, N., Stephens, J., Nolan, T., Meijer, W., Masterson, B., Muldoon, C., and O'Hare, G.: Dublin Bay Water Quality Modelling from Catchment to Coast, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20824, https://doi.org/10.5194/egusphere-egu2020-20824, 2020.
EGU2020-19093 | Displays | ITS2.4/HS12.1 | Highlight
DANUBIUS-RI: Future Vision and Research Needs for River-Sea Systems
Sina Bold, Jana Friedrich, Peter Heininger, Chris Bradley, Andrew Tyler, Adrian Stanica, and Danubius-pp Consortium
More than three-quarters of the Earth's land surface is connected to the ocean by rivers. This natural connection between land and ocean through rivers, estuaries and deltas, as well as coastal seas, is essential for humankind, providing key ecosystem services (including food and water). However, the quantity and quality of water and sediment transported along the river-sea continuum are changing fundamentally, with implications for the structure and functioning of associated ecosystems, which in turn affects the continued provision of ecosystem services.
DANUBIUS-RI, the International Centre for Advanced Studies on River-Sea Systems, is a distributed research infrastructure (RI) integrating studies of rivers and their catchments, transitional waters, such as estuaries, deltas and lagoons, and their adjacent coastal seas (i.e. River-Sea Systems). DANUBIUS-RI’s vision is to achieve healthy River-Sea Systems and advance their sustainable management in order to live within the planet’s ecological limits by 2050. DANUBIUS-RI’s mission is to facilitate excellent research from the river source to the sea by (1) providing access to state-of-the art facilities, methods and tools, as well as samples and data; (2) bringing together relevant expertise to advance process and system understanding and to enhance stakeholder engagement; and (3) enabling the development of integrated management and policy-making in River-Sea Systems. DANUBIUS-RI’s mission-oriented, integrated, interdisciplinary and participatory approach seeks to change the process and system understanding of River-Sea Systems and their respective management.
DANUBIUS-RI’s Science & Innovation Agenda guides the RI’s evolution as it progresses from preparation through implementation to operation. It describes DANUBIUS-RI’s vision, mission and approach, provides a scientific framework for the RI’s design, and highlights the research priorities for the first five years. The framework includes interrelated key challenges in River-Sea Systems, such as global change (including climate change and extreme events), changes in hydromorphology, the quantity and quality of water and sediment across the river-sea continuum, and the structure and functioning of associated ecosystems. DANUBIUS-RI’s research priorities are in line with the forthcoming missions of Horizon Europe, applied here to River-Sea Systems: (1) “Achieving healthy inland, transitional and coastal waters”, including the research priorities (a) Water Quantity, (b) Sediment Balance, (c) Nutrients and Pollutants, (d) Biodiversity and (e) Ecosystem Services; and (2) “Adapting to Climate Change: Enhancing Resilience of River-Sea Systems”, including the research priorities (f) Climate Change and (g) Extreme Events.
In 2016, the European Strategy Forum for Research Infrastructures (ESFRI) included DANUBIUS-RI in its roadmap, highlighting the need for a research infrastructure at the freshwater-marine interface. The Horizon 2020 project DANUBIUS-PP (Preparatory Phase) has built the scientific, legal and financial foundation to enable DANUBIUS-RI to proceed to implementation (www.danubius-pp.eu).
How to cite: Bold, S., Friedrich, J., Heininger, P., Bradley, C., Tyler, A., Stanica, A., and Consortium, D.: DANUBIUS-RI: Future Vision and Research Needs for River-Sea Systems, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19093, https://doi.org/10.5194/egusphere-egu2020-19093, 2020.
EGU2020-18320 | Displays | ITS2.4/HS12.1
Modus Operandi of the International Centre for Advanced Studies on River-Sea Systems (DANUBIUS-RI)
Jana Friedrich, Sina Bold, Peter Heininger, Chris Bradley, Andrew Tyler, Adrian Stanica, and Danubius-PP Consortium
DANUBIUS-RI, the International Centre for Advanced Studies on River-Sea Systems, is a distributed environmental research infrastructure on the Roadmap of the European Strategy Forum on Research Infrastructures (ESFRI). DANUBIUS-RI offers a new paradigm in aquatic science: the River-Sea continuum approach. It aims to provide an integrated research infrastructure with interdisciplinary expertise encompassing remote and in-situ observation systems (including ships), experimental facilities, laboratories, modelling tools and resources for knowledge exchange along freshwater-seawater continua throughout Europe, from river source to sea. The Science and Innovation Agenda of DANUBIUS (SIA), presented in a tandem presentation, provides the scientific rationale that underpins the technical and organisational design of the research infrastructure, and hence the components and their interaction. The research needs and priorities identified in the SIA shape the design of the infrastructure to ensure DANUBIUS-RI provides the interdisciplinary expertise, tools and capacities required.
This presentation describes the DANUBIUS-RI components, their functions and interactions, the governance structure and services of the RI to demonstrate how DANUBIUS-RI will transform its mission into science and services for the benefit of healthy River-Sea Systems.
The DANUBIUS-RI Components comprise the Hub, Data Centre, Nodes, Supersites, e-Learning Office and Technology Transfer Office, distributed across Europe. DANUBIUS-ERIC, as legal entity, provides the effective governance framework: it coordinates, manages, harmonizes and communicates the activities carried out by the DANUBIUS-RI Components.
The DANUBIUS Commons will be a key element of DANUBIUS-RI: a set of harmonised regulations, methods, procedures and standards for scientific and non-scientific activities, to guarantee the integrity, relevance, consistency and high quality of DANUBIUS-RI’s products and services. The DANUBIUS Commons will provide the framework to ensure that the outputs of DANUBIUS-RI are compatible, comparable and exchangeable throughout the research infrastructure and within the user community.
The DANUBIUS-RI Services span a range of disciplines, which is essential to address the major research questions and challenges in River-Sea Systems. Seven categories of services have been developed: (1) digital and non-digital data; (2) tools, methods and expert support; (3) study and measurements; (4) diagnostic and impact; (5) solution development; (6) tests, audit, validation and certification; and (7) training.
DANUBIUS-RI will cooperate closely with other research infrastructures, including ICOS-ERIC, EMSO-ERIC, EURO-ARGO ERIC, LifeWatch ERIC and eLTER; with research infrastructure networks such as HYDRALAB and JERICO; with River Basin and Regional Seas Commissions; with data programmes and initiatives such as the European Copernicus programme, EUMETSAT and SeaDataNet; and with research programmes and initiatives such as JPI Water and JPI Oceans.
DANUBIUS-RI completed its preparatory phase (DANUBIUS-PP) at the end of 2019 and has now started its implementation. The first components have successfully applied for EU infrastructural funding (EFRE). DANUBIUS-RI is expected to be operational by 2023.
How to cite: Friedrich, J., Bold, S., Heininger, P., Bradley, C., Tyler, A., Stanica, A., and Consortium, D.-P.: Modus Operandi of the International Centre for Advanced Studies on River-Sea Systems (DANUBIUS-RI), EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18320, https://doi.org/10.5194/egusphere-egu2020-18320, 2020.
EGU2020-22281 | Displays | ITS2.4/HS12.1
Enhancing River-Sea System Understanding by providing insights into headwaters – the Upper Danube Austria Supersite of DANUBIUS-RI
Eva Feldbacher, Stefan Schmutz, Gabriele Weigelhofer, and Thomas Hein
Austria has a share in three international river basins (Danube, Elbe, Rhine), but by far most of its territory (> 96%) drains into the Danube. This Austrian territory accounts for 10% of the total area of the Danube River Basin and belongs entirely to the Upper Danube Basin, which extends from the source of the Danube in Germany to Bratislava, at Austria’s eastern border with Slovakia. Austria contributes approx. 25% (ca. 50 km³/a) of the total yearly discharge of the Danube into the Black Sea (ca. 200 km³/a).
Human activities have severely altered the Upper Danube catchment, impacting both the main stem and the main pre-alpine tributaries. Due to the Upper Danube’s considerable natural gradient and mountainous character, this part of the Danube is extensively used for hydropower production. Ten large (> 10 MW) hydropower plants are situated along the Austrian Danube (out of a total of 41), and only two Danube stretches can still be characterised as free-flowing (Wachau, Nationalpark Donau-Auen). Besides energy generation, other human activities such as agriculture, shipping, industrialisation, urbanisation and tourism have been and still are changing the process and system dynamics of the Upper Danube. Climate change is additionally affecting this already heavily impacted river system.
The Upper Danube Austria and its pre-alpine network of tributaries is therefore an ideal case-study region to investigate the multiple effects of human activities on riverine systems, and was chosen as a “supersite” within DANUBIUS-RI, the “International Centre for Advanced Studies on River-Sea Systems”. DANUBIUS-RI is being developed as a distributed research infrastructure with the goal of supporting interdisciplinary and integrated research on river-sea systems. It aims to enable and support research addressing the conflicts between society’s demands, environmental change and environmental protection in river-sea systems worldwide, bringing together research on freshwaters and their interface with marine waters and drawing on existing research excellence across Europe.
The supersite “Upper Danube Austria and its pre-alpine network of tributaries” covers the freshwater spectrum within the river-sea continuum, ranging from alpine and pre-alpine headwater streams along major Danube tributaries to the Danube River, including adjacent floodplains in the Upper Danube catchment. The research focus lies on the interactive effects of climate change, land use pressures, and hydromorphological alterations on the biodiversity, ecological functions, and the ecosystem service provision of streams and rivers in the Upper Danube basin and their role within the catchment.
The Supersite “Upper Danube Austria and its pre-alpine network of tributaries” joins the forces of eight Austrian research institutions and is led by WasserCluster Lunz and the Institute for Hydrobiology and Aquatic Ecosystem Management (IHG) at the University of Natural Resources and Life Sciences, Vienna (BOKU). Research on the sustainable management and restoration of riverine landscapes (WFD, FD, HD, Biodiversity Strategy) in the Upper Danube catchment is an important contribution to a healthy River-Sea System of the Danube River Basin as a whole.
How to cite: Feldbacher, E., Schmutz, S., Weigelhofer, G., and Hein, T.: Enhancing River-Sea System Understanding by providing insights into headwaters– the Upper Danube Austria Supersite of DANUBIUS-RI, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22281, https://doi.org/10.5194/egusphere-egu2020-22281, 2020.
EGU2020-22494 | Displays | ITS2.4/HS12.1
Interactions of extreme river flows and sea levels for coastal flooding
Peter Robins, Lisa Harrison, Mariam Elnahrawi, Matt Lewis, Tom Coulthard, and Gemma Coxon
Coastal flooding is responsible for the vast majority of natural disasters worldwide, and costs the UK £2.2 billion per year. Fluvial and surge-tide extremes can occur synchronously, resulting in combination flooding hazards in estuaries and intensifying the flood risk beyond fluvial-only or surge-only events. Worse, this flood risk has the potential to increase further in the future as the frequency and/or intensity of these drivers change, combined with projected sea-level rise. Yet the sensitivity of contrasting estuaries to combination and compound flooding hazards at sub-daily scales – now and in the future – is unclear. Here, we investigate the dependence between fluvial and surge extremes at sub-daily scales for contrasting catchment and estuary types (Humber vs. Dyfi, UK), using 50+ years of data: 15-min fluvial flows and hourly sea levels. Additionally, we simulate intra-estuary (<50 m resolution) sensitivities to combination flooding hazards based on: (1) realistic extreme events (worst-on-record); (2) realistic events with shifted timings of the drivers to maximise flooding; and (3) modified drivers representing projected climate change.
For well-documented flooding events, we show significant correlation between skew surge and peak fluvial flow for the Dyfi (a small catchment and estuary with a fast fluvial response on the west coast of Britain), with higher dependence during autumn/winter months. In contrast, we show no dependence for the Humber (a large catchment and estuary with a slow fluvial response on the east coast of Britain); cross-correlation results, however, did show correlation with a time lag (~10 hours). For the Dyfi, flood extent was sensitive to the relative timing of the fluvial and surge-tide drivers. In contrast, the relative timing of these drivers did not affect flooding in the Humber; indeed, extreme fluvial flows in the Humber actually reduced water levels in the outer estuary compared with a surge-only event. Projected future changes in these drivers by 2100 are likely to increase combination flooding hazards: sea-level rise scenarios predicted substantial and widespread flooding in both estuaries, while similar increases in storm surge resulted in a greater seawater influx, altering the character of the flooding. Projected changes in fluvial volumes were the weakest driver of estuarine flooding. On the west coast of Britain, with its many small, steep catchments, combination flooding hazards from fluvial and surge extremes occurring together are likely. High-resolution data and hydrodynamic modelling are therefore necessary to resolve the impact and inform flood mitigation.
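The lagged-dependence analysis described above can be illustrated with a minimal sketch: compute the Pearson correlation between river flow and skew surge over a range of time lags and locate the lag with the strongest correlation. The data below are synthetic (one storm signal driving the surge directly and the river with a ~10 h catchment delay); the function and series names are illustrative assumptions, not the authors' code or data.

```python
import numpy as np

def lagged_correlation(flow, surge, max_lag):
    """Pearson correlation between river flow and skew surge for a range
    of time lags; a positive lag means the surge leads the flow."""
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag > 0:
            a, b = flow[lag:], surge[:-lag]
        elif lag < 0:
            a, b = flow[:lag], surge[-lag:]
        else:
            a, b = flow, surge
        out[lag] = float(np.corrcoef(a, b)[0, 1])
    return out

# Synthetic hourly series: one storm signal drives the surge directly,
# while its rainfall reaches the river gauge about 10 h later.
rng = np.random.default_rng(42)
storm = np.convolve(rng.exponential(1.0, 600), np.ones(12) / 12, mode="same")
surge = storm + 0.05 * rng.standard_normal(600)
flow = np.roll(storm, 10) + 0.05 * rng.standard_normal(600)

corr = lagged_correlation(flow, surge, max_lag=24)
best_lag = max(corr, key=corr.get)  # lag (hours) of strongest dependence
```

A slow-responding catchment such as the Humber would show its correlation peak at a non-zero lag of this kind, whereas a fast-responding catchment such as the Dyfi peaks near zero lag.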
How to cite: Robins, P., Harrison, L., Elnahrawi, M., Lewis, M., Coulthard, T., and Coxon, G.: Interactions of extreme river flows and sea levels for coastal flooding, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22494, https://doi.org/10.5194/egusphere-egu2020-22494, 2020.
EGU2020-14936 | Displays | ITS2.4/HS12.1
Extreme event occurrences and impacts in coastal waters of western Europe
Coline Poppeschi, Maximilian Unterberger, Guillaume Charria, Peggy Rimmelin-Maury, Eric Goberville, Nicolas Barrier, Emilie Grossteffan, Michel Repecaud, Loïc Quemener, Sébastien Theetten, Sébastien Petton, Jean-François Le Roux, and Paul Tréguer
Coline Poppeschi1, Maximilian Unterberger1, Guillaume Charria1, Peggy Rimmelin-Maury2, Eric Goberville3, Nicolas Barrier5, Emilie Grossteffan2, Michel Repecaud6, Loïc Quemener6, Sébastien Theetten1, Sébastien Petton7, Jean-François Le Roux1, Paul Tréguer4
1 Ifremer, Univ. Brest, CNRS, IRD, Laboratoire d'Océanographie Physique et Spatiale (LOPS), IUEM, 29280 Brest, France.
2 OSU-Institut Universitaire Européen de la Mer (IUEM), UMS3113, F-29280, Plouzané, France.
3 Muséum National d’Histoire Naturelle, UMR 7208 BOREA, Sorbonne Université, CNRS, UCN, UA, IRD, Paris, France.
4 IUEM, UMR-CNRS 6539 Laboratoire de l’Environnement Marin (LEMAR), OSU IUEM, F-29280, Plouzané, France.
5 MARBEC, Université de Montpellier, Centre National de la Recherche Scientifique (CNRS), Ifremer, Institut de Recherche pour le Développement (IRD), F-34203 Sète, France.
6 Ifremer, Centre de Brest, REM/RDT/DCM, F-29280, Plouzané, France.
7 Ifremer, Centre de Brest, RBE/PFOM/LPI, F-29840, Argenton en Landunvez, France.
Abstract
The occurrence and impact of extreme atmospheric events in the coastal waters of western Europe are changing. The responses of the coastal environment to these events and changes need to be explored and explained. In this framework, we study the hydrodynamical and biogeochemical processes driven by extreme events in the Bay of Brest to better estimate their impacts on the local ecosystem. We analyse long-term in situ observations (since 2000), sampled at high and low frequency, from the COAST-HF and SOMLIT network sites located at the entrance to the Bay of Brest. The study has two main parts: the detection and characterisation of extreme events, followed by the analysis of a realistic numerical simulation of these events to understand the underlying ocean processes. We focus on freshwater events during the winter months (December, January, February and March), the season with the highest occurrence of extreme events. The relationship between local extreme events and larger-scale variability, considering climate indices such as the North Atlantic Oscillation (NAO), is detailed. A comparison between the low-frequency data from the SOMLIT network and the high-frequency data from the COAST-HF network highlights the potential of high-frequency measurements for detecting extreme events. A comparison between in situ data and two numerical simulations of different resolutions is also performed on salinity time series. The interannual variability of extreme event occurrences and characteristics in a context of climate change is also discussed. We show the link between these extreme low-salinity events and winter nitrate levels in the Bay of Brest, and then investigate the relationship between extreme events and biology in the coastal environment.
Keywords
In-situ observations, High and low frequency measurements, Extreme events, Numerical simulations, Bay of Brest, Weather regimes.
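One simple way to detect sustained low-salinity (freshwater) events in a high-frequency record, of the kind discussed above, is a threshold-plus-duration rule. The sketch below is illustrative only: the salinity values, threshold and minimum duration are assumptions, not the study's detection method or data.

```python
import numpy as np

def detect_low_salinity_events(salinity, threshold, min_duration):
    """Return (start, end) sample-index pairs where salinity stays below
    `threshold` for at least `min_duration` consecutive samples."""
    events, start = [], None
    for i, below in enumerate(salinity < threshold):
        if below and start is None:
            start = i
        elif not below and start is not None:
            if i - start >= min_duration:
                events.append((start, i))
            start = None
    if start is not None and len(salinity) - start >= min_duration:
        events.append((start, len(salinity)))
    return events

# Synthetic hourly salinity record with one sustained freshwater pulse
# and one short dip that should be ignored.
sal = np.full(500, 34.0)
sal[100:140] = 28.0   # 40 h low-salinity event
sal[300:305] = 28.0   # 5 h dip, shorter than min_duration
events = detect_low_salinity_events(sal, threshold=30.0, min_duration=12)
```

The duration criterion is what makes high-frequency data valuable here: a low-frequency (e.g. weekly) sampling scheme can miss events of a few hours to days entirely.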
How to cite: Poppeschi, C., Unterberger, M., Charria, G., Rimmelin-Maury, P., Goberville, E., Barrier, N., Grossteffan, E., Repecaud, M., Quemener, L., Theetten, S., Petton, S., Le Roux, J.-F., and Tréguer, P.: Extreme event occurrences and impacts in coastal waters of western Europe, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-14936, https://doi.org/10.5194/egusphere-egu2020-14936, 2020.
EGU2020-6670 | Displays | ITS2.4/HS12.1
Assessing the impact of climate change on water quality and quantity in the Elbe catchment using an open-data driven approach
Alexander Wachholz, Seifeddine Jomaa, Olaf Büttner, Robert Reinecke, Michael Rode, and Dietrich Borchardt
Due to global climate change, the past decade has been the warmest in Germany since climate records began. Not only air temperature but also precipitation patterns are changing, thereby influencing the hydrologic cycle. This will certainly influence the chemical status of ground- and surface-water bodies, as the mobilisation, dilution and chemical reactions of contaminants are altered. However, it is uncertain whether those alterations will change water quality for better or worse, and how they play out spatially. Since water management in Europe is handled at the regional scale, we suggest that an investigation at the same scale is needed to capture and quantify the different responses of the chemical status of water bodies to climate change and extreme weather conditions. In this study, we use open-access data to (1) quantify changes in temperature, precipitation, streamflow and groundwater levels over the past 40-60 years and (2) assess their impacts on nutrient concentrations in surface- and groundwater bodies. To disentangle management effects from climate effects, we pay special attention to the extreme weather conditions of the past decade. Referring to the Water Framework Directive, we chose the river basin district Elbe as our area of interest. Preliminary results indicate that nitrate concentrations in surface water bodies of the Elbe catchment in particular were positively affected in the last two years, while no significant impact on nitrate levels in shallow groundwater bodies was observed. However, many wells showed, in both years, their first significant increase in water table depth since 1985, raising the question of how fast groundwater-surface water interactions will change in the coming years.
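Objective (1), quantifying change in long open-access records, can be sketched minimally as a least-squares trend fit to an annual series. The series below is synthetic (an assumed warming rate plus noise), chosen only to illustrate the calculation, and is not data from the study.

```python
import numpy as np

def linear_trend(years, values):
    """Least-squares trend of an annual series, in units per year."""
    slope, _intercept = np.polyfit(years, values, 1)
    return slope

# Synthetic annual mean air temperature: 0.03 K/yr warming plus noise.
years = np.arange(1960, 2020)
rng = np.random.default_rng(0)
temp = 8.0 + 0.03 * (years - 1960) + 0.2 * rng.standard_normal(years.size)

trend_per_decade = 10 * linear_trend(years, temp)  # K per decade
```

Applied per station or per gauge, such trend estimates make the spatial pattern of change across a river basin district directly comparable.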
How to cite: Wachholz, A., Jomaa, S., Büttner, O., Reinecke, R., Rode, M., and Borchardt, D.: Assessing the impact of climate change on water quality and quantity in the Elbe catchment using an open-data driven approach, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6670, https://doi.org/10.5194/egusphere-egu2020-6670, 2020.
EGU2020-5655 | Displays | ITS2.4/HS12.1
Summer drought conditions promote the dominant role of phytoplankton in riverine nutrient dynamics
Norbert Kamjunke, Michael Rode, Martina Baborowski, Vanessa Kunz, Oliver Lechtenfeld, Peter Herzsprung, and Markus Weitere
Large rivers play an important role in nutrient turnover from land to ocean. Here, highly dynamic planktonic processes are more important than in streams, making it necessary to link the dynamics of nutrient turnover to the control mechanisms of phytoplankton. We investigated the basic conditions leading to high phytoplankton biomass and the corresponding nutrient dynamics in the eutrophic River Elbe (Germany). In a first step, we performed six Lagrangian samplings in the lower river reach under different hydrological conditions. While nutrient concentrations remained high at low algal densities in autumn and at moderate discharge in summer, high algal concentrations occurred at low discharge in summer. Under these conditions, concentrations of silica and nitrate decreased and rates of nitrate assimilation were high. Soluble reactive phosphorus was depleted, and particulate phosphorus increased inversely. Rising molar C:P ratios of seston indicated phosphorus limitation of the phytoplankton. Global radiation combined with discharge had strong predictive power for maximum chlorophyll concentration. In a second step, we estimated nutrient turnover, exemplarily for N, during the campaign with the lowest discharge. Mass balance calculations revealed a total nitrate uptake of 455 mg N m⁻² d⁻¹, which was clearly dominated by assimilatory phytoplankton uptake, whereas denitrification and other benthic processes were of only minor importance. Phytoplankton density, which showed a sigmoidal longitudinal development, predominantly explained gross primary production, the related assimilatory nutrient uptake, and respiration. Chlorophyll a concentration and bacterial abundance affected the composition of dissolved organic matter and were positively related to a number of CHO and CHNO components with high H/C and low O/C ratios but negatively to several CHOS surfactants.
In conclusion, nutrient uptake in the large river strongly depends on the growth conditions for phytoplankton, which are favored during summer drought conditions.
How to cite: Kamjunke, N., Rode, M., Baborowski, M., Kunz, V., Lechtenfeld, O., Herzsprung, P., and Weitere, M.: Summer drought conditions promote the dominant role of phytoplankton in riverine nutrient dynamics, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5655, https://doi.org/10.5194/egusphere-egu2020-5655, 2020.
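A reach-scale mass balance of the kind behind the 455 mg N m⁻² d⁻¹ figure can be sketched as follows. The reach dimensions and loads below are invented for illustration (chosen only to land in the reported order of magnitude) and are not the study's data.

```python
# Sketch: reach-scale nitrate mass balance used to estimate total areal uptake.
# All numbers are illustrative, not taken from the Elbe study.

def nitrate_uptake_areal(load_in_kg_d, load_out_kg_d, reach_area_m2):
    """Total nitrate uptake per unit streambed area (mg N m^-2 d^-1).

    load_in/load_out: nitrate-N load entering/leaving the reach (kg N per day),
    i.e. discharge times concentration at the upstream and downstream stations.
    """
    retained_kg_d = load_in_kg_d - load_out_kg_d
    return retained_kg_d * 1e6 / reach_area_m2  # kg -> mg, then per m^2

# Hypothetical reach: 100 km long, 300 m wide
area_m2 = 100_000 * 300
uptake = nitrate_uptake_areal(load_in_kg_d=50_000,
                              load_out_kg_d=36_350,
                              reach_area_m2=area_m2)
print(round(uptake, 1))  # mg N m^-2 d^-1
```

Partitioning this total between assimilatory uptake and denitrification then requires independent rate measurements, as done in the study.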
EGU2020-1447 | Displays | ITS2.4/HS12.1
Interannual variabilities of nutrients and phytoplankton off the Changjiang Estuary in response to changing river inputs
Jianzhong Ge, Shenyang Shi, Changsheng Chen, and Richard Bellerby
Coastal ecosystems are strongly influenced by terrestrial and oceanic inputs of water, sediment and nutrients. Terrestrial nutrients in freshwater discharge are particularly important for mega-river estuaries. A remarkable increase in nutrient loads transported from the Changjiang River through the estuary to the shelf has been observed from 1999 to 2016. The Finite-Volume Community Ocean Model and the European Regional Seas Ecosystem Model were coupled to assess the interannual variability of nutrients and phytoplankton under these flux dynamics. The system exhibited a rapid ecosystem response to the changing river nutrient contribution. Singular value decomposition (SVD) analysis demonstrated that abundant nitrate from the river was diluted by low-nitrate water transported from the oceanic domain. In contrast, phosphate exhibited local variation, suggesting the estuarine ecosystem was phosphate-limited. The SVD results showed that there were no significant correlations between the suspended sediment and nutrients, but a significant correlation between sediment and phytoplankton. The nutrient structure of the river discharge resulted in the dominance of non-diatom species in the phytoplankton bloom from spring to autumn. The ratio of diatom and dinoflagellate populations showed a rapid feedback response to the strong oscillations in river nutrient input. High diatom primary production occurred near the sediment front, whereas dinoflagellate blooms extended significantly offshore. Both diatoms and dinoflagellates had major peaks representing spring blooms in empirical orthogonal function Modes 1 and 2, and secondary peaks in Mode 2 in the autumn, which coincided with the autumn bloom.
How to cite: Ge, J., Shi, S., Chen, C., and Bellerby, R.: Interannual variabilities of nutrients and phytoplankton off the Changjiang Estuary in response to changing river inputs, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1447, https://doi.org/10.5194/egusphere-egu2020-1447, 2020.
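A coupled-field SVD (maximum covariance) analysis of the kind mentioned above can be sketched on synthetic data. The field names, grid sizes, and noise level here are illustrative assumptions, not the study's model output.

```python
import numpy as np

# Sketch: SVD of the cross-covariance between two space-time anomaly fields,
# e.g. river nutrient anomalies vs. phytoplankton anomalies. Synthetic data.

rng = np.random.default_rng(0)
n_time, n_x, n_y = 48, 30, 25          # months x grid points of each field
shared = rng.standard_normal(n_time)    # common signal coupling the two fields

nutrients = np.outer(shared, rng.standard_normal(n_x)) \
    + 0.3 * rng.standard_normal((n_time, n_x))
plankton = np.outer(shared, rng.standard_normal(n_y)) \
    + 0.3 * rng.standard_normal((n_time, n_y))

# Remove the time mean, build the cross-covariance matrix, decompose it.
na = nutrients - nutrients.mean(axis=0)
pa = plankton - plankton.mean(axis=0)
cov = na.T @ pa / (n_time - 1)          # shape (n_x, n_y)
u, s, vt = np.linalg.svd(cov, full_matrices=False)

# Squared covariance fraction explained by the leading coupled mode.
scf1 = s[0] ** 2 / np.sum(s ** 2)
print(f"leading mode explains {scf1:.0%} of squared covariance")
```

The columns of `u` and rows of `vt` are the paired spatial patterns; projecting the anomaly fields onto them gives the expansion coefficients whose correlation measures the mode's coupling strength.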
EGU2020-1915 | Displays | ITS2.4/HS12.1
Rapid changes of the Changjiang (Yangtze) River and East China Sea source-to-sink conveying system in response to human induced catchment changes
Jianhua Gao
Human activity has led to rapid changes in the erosion and deposition conditions and boundaries of the different units within the Changjiang–ECS S2S conveying system, thereby causing major changes in the source-sink pattern of the entire system. After 2003, the insufficient sediment supply disrupted the mass balance between the estuary-coast-shelf deposition systems, altering the siltation and erosion states and seabed sediment types and driving adjustments in geomorphological evolution. In addition, the upper reach of the Changjiang has recently become disconnected from the Changjiang–ECS S2S conveying system and now forms an independent S2S conveying system. Thus, the length of the Changjiang–ECS S2S conveying system has been shortened, while the source area within the system has significantly increased.
How to cite: Gao, J.: Rapid changes of the Changjiang (Yangtze) River and East China Sea source-to-sink conveying system in response to human induced catchment changes, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1915, https://doi.org/10.5194/egusphere-egu2020-1915, 2020.
EGU2020-14404 | Displays | ITS2.4/HS12.1
Venice lagoon salt marsh vulnerability and halophytic vegetation vertical migration in response to sea level rise
Zhicheng Yang, Sonia Silvestri, Marco Marani, and Andrea D’Alpaos
Salt marshes are biogeomorphic systems that provide important ecosystem services such as carbon sequestration and prevention of coastal erosion. These ecosystems are, however, threatened by increasing sea levels and human pressure. Improving current knowledge of salt-marsh response to changes in the environmental forcing is a key step to understand and predict salt-marsh evolution, especially under accelerated sea level rise scenarios and increasing human pressure. Towards this goal, we have analyzed field observations of marsh topographic changes and halophytic vegetation distribution with elevation collected over 20 years (between 2000 and 2019) in a representative marsh in the Venice lagoon (Italy).
Our results suggest that: 1) on average, marsh elevation with respect to local mean sea level decreased (i.e. the surface accretion rate was lower than the rate of sea level rise); 2) elevational frequency distributions are characteristic for different halophytic vegetation species, highlighting different realized ecological niches that change in time; 3) although the preferential elevations at which the different species occur have changed in time, the sequence of vegetation species with increasing soil elevation was preserved and simply shifted upward; 4) we observed different migration rates for the different species, suggesting that the migration process is species-specific. In particular, the species colonizing marsh edges (Juncus and Inula) migrated faster in response to changes in sea level than Limonium and Spartina, while Sarcocornia was characterized by a delayed migration response to sea level changes. These results bear significant implications for the long-term biogeomorphic evolution of tidal environments.
How to cite: Yang, Z., Silvestri, S., Marani, M., and D’Alpaos, A.: Venice lagoon salt marsh vulnerability and halophytic vegetation vertical migration in response to sea level rise, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-14404, https://doi.org/10.5194/egusphere-egu2020-14404, 2020.
EGU2020-26 | Displays | ITS2.4/HS12.1
Damming effect on carbon processing in a subtropical valley-type reservoir in the upper Mekong Basin
Lin Lin
Damming rivers has been identified as one of the most intense artificial perturbations of carbon transport along the river continuum. To quantify the damming effect on the riverine carbon flux in the upper Mekong River, seasonal carbon fluxes were monitored in a subtropical valley-type reservoir (the Gongguoqiao Reservoir) in 2016. Annually, around 20% of the incoming carbon was sequestered within the reservoir, with most of the carbon retention occurring in the rainy season. Since higher rainfall and water discharge brought large amounts of terrestrial carbon into the reservoir in summer, the concentrations of dissolved inorganic carbon (DIC), particulate inorganic carbon (PIC) and particulate organic carbon (POC) in the top water showed significant decreasing trends from the river inlet to the outlet (p<0.01). During the cooler dry season (winter), however, the damming effect was much weaker. Precipitation of PIC, owing to the alkaline environment and decelerated flow velocity, contributed over half of the carbon retention in the reservoir. Correlation between suspended sediment concentration and carbon concentrations reveals that heavy sedimentation also resulted in the sequestration of particulate carbon. Yet the damming impact on the flux of dissolved organic carbon (DOC) was relatively weak due to the short water retention time and the refractory nature of allochthonous carbon. The anti-seasonal operation of the dam allowed little time for decomposition of the incoming DOC in the rainy season. This differential processing of the carbon flow significantly increased the dissolved carbon proportion in the outflow. Dams can thus act as filters, an effect that might be exacerbated in cascading systems. Accumulation of dissolved organic carbon can possibly accelerate eutrophication in downstream reservoirs and thus alter the aquatic carbon dynamics of downstream river channels.
How to cite: Lin, L.: Damming effect on carbon processing in a subtropical valley-type reservoir in the upper Mekong Basin, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-26, https://doi.org/10.5194/egusphere-egu2020-26, 2020.
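The reported ~20% annual retention is, at its core, an inflow-outflow mass balance. A minimal sketch, with invented seasonal fluxes rather than the study's monitoring data:

```python
# Sketch: reservoir carbon retention fraction from inflow/outflow fluxes,
# split by season. All flux values are illustrative, not the study's data.

def retention_fraction(flux_in, flux_out):
    """Fraction of incoming carbon retained in the reservoir (dimensionless)."""
    return (flux_in - flux_out) / flux_in

wet = retention_fraction(flux_in=80.0, flux_out=62.0)     # rainy season, Gg C
dry = retention_fraction(flux_in=20.0, flux_out=18.0)     # dry season, Gg C
annual = retention_fraction(flux_in=100.0, flux_out=80.0)  # Gg C
print(f"annual retention: {annual:.0%}")
```

With these hypothetical numbers the annual retention is 20%, and the wet-season fraction exceeds the dry-season one, mirroring the seasonal asymmetry described above.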
EGU2020-8373 | Displays | ITS2.4/HS12.1
Simulating the spatio-temporal variability in terrestrial - aquatic DOC transfers across Europe
Celine Gommet, Ronny Lauerwald, Philippe Ciais, and Pierre Regnier
Inland waters receive substantial amounts of dissolved organic carbon (DOC) from surrounding soils, which drives a pronounced net heterotrophy and subsequent CO2 emission from these systems. At the same time, this DOC transfer decreases the soil carbon sequestration capacity, which may limit the efficiency of the land carbon sink. The variation of DOC stocks and fluxes in time and space is modeled using the ORCHILEAK model, which couples terrestrial ecosystem processes, carbon exports from soils to headwater streams by runoff and drainage, and carbon decomposition and transport in rivers until export to the coastal ocean. The runs were performed at a resolution of 0.5°, taking advantage of the relatively dense observations of soil and river DOC available for European catchments. The model's hydrology was first evaluated by comparing simulated discharge with observations at different stations along several large European rivers. The DOC measurements were used to calibrate the parameters of the ORCHILEAK model and to evaluate the model results. ORCHILEAK was then used to generate the first European map of DOC stocks and leaching for the four seasons. We estimate a soil DOC stock of 71 Tg C and a DOC leaching flux of 7.8 Tg C yr⁻¹, largely dominated by runoff exports during the winter season. Our model results also allow us to identify the underlying processes controlling the fraction of terrestrial NPP exported to the European inland water network. The next step will be to use the model to hindcast historical DOC fluxes and to predict their evolution over the 21st century using climate change and land use projections from the SSP-RCP scenarios developed for the IPCC assessment report.
How to cite: Gommet, C., Lauerwald, R., Ciais, P., and Regnier, P.: Simulating the spatio-temporal variability in terrestrial - aquatic DOC transfers across Europe, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8373, https://doi.org/10.5194/egusphere-egu2020-8373, 2020.
EGU2020-8875 | Displays | ITS2.4/HS12.1
Carbon and nutrient cycling between estuarine and adjacent coastal waters
Louise Rewrie, Yoana Voynova, Holger Brix, and Burkard Baschek
Seasonal and annual nitrate and phosphate loads were determined from FerryBox measurements to investigate the high seasonal and inter-annual variability of carbon and nutrient exchange between the Elbe estuary and North Sea. At the inner continental shelf, high biological activity is driven by riverine nutrient inputs, which can contribute to the net carbon dioxide (CO2) uptake. It is possible that in tidal systems this newly formed phytoplankton is transported back into the estuary over the flood tide, and this organic matter can be remineralized in the intertidal region. At present, the influence of this tidally driven mechanism on the nutrient exports and primary production in the coastal zone is not fully characterized, hence carbon sources and sinks at the estuary-coastal boundary may not be well accounted for.
The coupling between nutrient inputs from the Elbe estuary to adjacent coastal waters and the subsequent biological activity is now being investigated with a high-frequency dataset provided by a FerryBox situated at the mouth of the estuary. The FerryBox continuously measures physical and biogeochemical parameters every 10 to 60 minutes. Preliminary seasonal nutrient (nitrate and phosphate) loads from the Elbe estuary to the coastal waters were calculated with FerryBox data between 2014 and 2017. The nutrient loads exhibited high seasonal and inter-annual variability. For example, in summer 2014 nitrate loads reached 100 × 10⁷ mol yr⁻¹, whereas in summer 2017 nitrate loads were 50 × 10⁷ mol yr⁻¹, a difference that cannot be explained by river discharge alone. Such changes in nutrient loads are likely to influence primary production rates in the adjacent coastal waters and to impact CO2 uptake and therefore carbon cycling.
Time-series analysis is employed to determine patterns in oxygen changes in relation to photosynthesis and respiration, along with nutrient fluctuations, between 2014 and 2017. Salinity is used to differentiate between the coastal and estuarine end members, with low and high salinity representing the flood tide (estuarine waters) and ebb tide (coastal waters), respectively. Changes in dissolved oxygen concentrations are used to estimate primary production (P) and community respiration (R) rates in the water column. The P/R ratio allows the community to be classified as net autotrophic or net heterotrophic. Results of this analysis will show the role of varying nutrient loads in supporting primary production in the coastal waters, along with estimates of net ecosystem metabolism, and therefore give us a better understanding of nutrient and carbon cycling.
How to cite: Rewrie, L., Voynova, Y., Brix, H., and Baschek, B.: Carbon and nutrient cycling between estuarine and adjacent coastal waters , EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8875, https://doi.org/10.5194/egusphere-egu2020-8875, 2020.
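A load estimate of this kind is, in essence, the time integral of concentration times discharge. A minimal sketch with synthetic, constant values; the sampling interval and magnitudes below are illustrative assumptions, not FerryBox data:

```python
import numpy as np

# Sketch: integrating a nutrient load from paired concentration and discharge
# time series, as one would with FerryBox-type records. Values are synthetic.

def total_load_mol(conc_mol_m3, discharge_m3_s, dt_s):
    """Total load (mol) over the record: sum of C * Q * dt."""
    return float(np.sum(conc_mol_m3 * discharge_m3_s) * dt_s)

# One summer (~92 days) sampled hourly, with invented magnitudes:
hours = 92 * 24
dt = 3600.0                         # seconds per sample
conc = np.full(hours, 0.15)         # mol NO3 m^-3 (~150 umol/L)
q = np.full(hours, 700.0)           # m^3 s^-1

load = total_load_mol(conc, q, dt)
print(f"{load:.2e} mol per season")
```

With real records, `conc` and `q` would be the measured time series (with gaps handled explicitly), and salinity would first be used to assign each sample to the estuarine or coastal end member.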
EGU2020-2128 | Displays | ITS2.4/HS12.1
Metabolism, transportation, and redistribution difference of atrazine and acetochlor from estuary to bay
Yu Zhang
Metabolism, transportation, and redistribution difference of atrazine and acetochlor from estuary to bay
Yu Zhang (a), Wei Ouyang (a), Zihan Wang (a), Zewei Guo (a), Zeshan Wu (a), Mats Tysklind (b), Chunye Lin (a), Baodong Wang (c), and Ming Xin (c)
(a) State Key Laboratory of Water Environment Simulation, School of Environment, Beijing Normal University, Beijing 100875, China
(b) Environmental Chemistry, Department of Chemistry, Umeå University, SE-901 87 Umeå, Sweden
(c) The First Institute of Oceanography, State Oceanic Administration, 6 Xianxialing Road, Qingdao, 266061, China
Abstract
Agricultural activities are a major cause of pollution in many watersheds, mainly due to the discharge of herbicides. Herbicides undergo continual degradation and exhibit distinctive patterns during transport from watershed to bay. The spatial distributions of atrazine, its dealkylated chlorotriazine metabolites, and acetochlor in water, suspended particulate sediment (SPS), and surface sediment were investigated from estuary to bay. The concentrations of atrazine, deethyl-atrazine (DEA), and deisopropyl-atrazine (DIA) were generally higher in the coastal zone than in the estuary and bay. The decline of the metabolites with water distance demonstrated that atrazine degradation was active from estuary to bay, and DIA had the shortest half-distance of 1.6 km. In contrast, acetochlor concentration decreased with increasing seawater depth and had a longer half-distance (8.5 km) than atrazine and its metabolites. Didealkyl-atrazine (DDA) had the highest concentration in SPS (7.6 ng/L) and sediment (7.0 ng/g) among these herbicides, which indicated that it had the greatest sorption capacity. Both the spatial distributions and the vertical contents in water, SPS, and sediment demonstrated that these herbicides responded differently during transport from the estuary to the bay. Despite the significant differences in the contents of atrazine, DEA, DIA, and acetochlor in the water and sediment, their spatially averaged values in SPS were very close, indicating that the SPS had a saturated sorption capacity. Analysis of the water-particle phase partition coefficient (Kp) indicated that the partition process was more active in the estuary than in the bay for atrazine and its metabolites, and that the metabolites partitioned more strongly than atrazine. The Kp of acetochlor was the highest among the herbicides, which illustrates that acetochlor was strongly phase partitioned in the coastal and bay zones, thereby causing a similar distribution of acetochlor in the three matrices.
The correlation between Kp and the corresponding octanol-water partitioning coefficient indicated that the hydrophobicity of atrazine and its metabolites was an important factor for the partition between seawater and SPS. Tidal currents and bathymetry were the critical factors determining the spatial distribution of the herbicides in the water and sediment, resulting in a low load in the estuary zone.
Key words: Typical herbicides; Phase partition; Diffuse pollution; Suspended particulate sediment; Semiclosed bay
How to cite: Zhang, Y.: Metabolism, transportation, and redistribution difference of atrazine and acetochlor from estuary to bay, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2128, https://doi.org/10.5194/egusphere-egu2020-2128, 2020.
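Two quantities used in this abstract, the half-distance of an exponentially decaying concentration and the water-particle partition coefficient Kp, can be written out explicitly. The decay rate and concentrations below are invented for illustration, not the study's fitted values.

```python
import math

# Sketch of two quantities from the abstract, with invented numbers:
# (1) half-distance of a compound whose concentration decays exponentially
#     with distance x:  C(x) = C0 * exp(-k * x)  =>  x_half = ln(2) / k
# (2) water-particle partition coefficient Kp = C_particulate / C_dissolved.

def half_distance_km(k_per_km):
    """Distance over which the concentration halves, for decay rate k (km^-1)."""
    return math.log(2) / k_per_km

def partition_coefficient(c_particulate_ng_g, c_dissolved_ng_ml):
    """Kp in mL/g: particle-bound (ng/g) over dissolved (ng/mL) concentration."""
    return c_particulate_ng_g / c_dissolved_ng_ml

# A decay rate of ~0.433 km^-1 gives a half-distance near the 1.6 km reported
# for DIA (an illustrative back-calculation, not the study's regression):
print(round(half_distance_km(0.433), 2))

# Hypothetical Kp: 7.0 ng/g on particles over 0.05 ng/mL dissolved:
print(partition_coefficient(7.0, 0.05))
```

Fitting `k` to station data (log-linear regression of concentration against distance) is what yields the compound-specific half-distances compared in the abstract.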
Metabolism, transportation, and redistribution difference of atrazine and acetochlor from estuary to bay
Yu Zhang a, Wei Ouyang a, Zihan Wang a, Zewei Guo a, Zeshan Wu a, Mats Tysklind b, Chunye Lin a,
Baodong Wang c, Ming Xin c
a State Key Laboratory of Water Environment Simulation, School of Environment, Beijing Normal University, Beijing 100875, China
b Environmental Chemistry, Department of Chemistry, Umeå University, SE-901 87 Umeå, Sweden
c The First Institute of Oceanography, State Oceanic Administration, 6 Xianxialing Road, Qingdao, 266061, China
EGU2020-5986 | Displays | ITS2.4/HS12.1
Evolution of river pollutions under the influence of local hydro-climatic changes - the example of the Bienne River (Jura Mountain, France)Elie Dhivert, François Gibon, Karine Hochart, and Bertrand Devillers
In application of the EU Water Framework Directive, many actions have been undertaken to reduce pollution levels in river systems. However, in certain catchments, the expected recovery is not occurring. In the Bienne River basin, metal discharges have plummeted since the 1990s, following the implementation of better industrial waste management as well as substantial industrial restructuring. Nevertheless, this river was regularly affected by massive fish mortality over the 2012-2019 period. This phenomenon, never identified before, is becoming recurrent. Organic tissues sampled from dead fish contained high concentrations of metals in association with other toxicants. In this context, this study introduces a transdisciplinary approach in order to: (i) analyse the spatial and temporal evolution of pollution in the Bienne River, (ii) evaluate the associated potential ecotoxicological impacts, and (iii) identify interactions with local hydro-climatic changes. Metallic and organic pollutants were analysed at different stations and at multiple temporal scales, combining sedimentary archives, suspended matter, and passive water samplers. These analyses highlight the impact on river quality of both current and legacy pollution, particularly during prolonged low-water periods and high-discharge events. Ecotoxicological analyses indicate a severe risk level in the case of remobilization of polluted sediments, especially because of heavy metal and PAH contamination. Geochemical evidence of such remobilization events has been recorded over the last decade in a sedimentary core sampled in the downstream part of the Bienne River. Hydrological data recorded at the Bienne River gauging stations since 1971 attest to substantial year-to-year variability, although changes in the distribution of river discharge are ongoing. 
The data show a higher frequency of both the lowest and the highest outflows over the 2012-2019 period compared to the rest of the hydrological record. Hydro-climatic variables from in-situ measurements and satellite data (GPM-IMERG6) also show significant modifications of the rainfall regime over this period, notably an increase in dry spells and heavy rainfall episodes. These modifications agree well with the observed discharge changes. This study brings out the knock-on impacts of combined geochemical, ecotoxicological and hydro-sedimentary processes on the fate of aquatic ecosystems, especially under the influence of local hydro-climatic changes and their implications for hydrological regimes. These results aim at reducing uncertainties concerning the evolution of river quality by highlighting such a tipping point in environmental conditions. In addition, such a study helps to grasp the complexity of local stakes regarding the multiple interests of the wide range of stakeholders and policy makers involved in the field.
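The frequency comparison described above can be sketched as a quantile-exceedance count over a daily discharge record split at the 2012-2019 period. The 5%/95% thresholds are illustrative choices, not the study's definition of extreme flows:

```python
import numpy as np

def extreme_frequency(discharge, years, low_q=0.05, high_q=0.95,
                      period=(2012, 2019)):
    """Fraction of observations falling below the record-wide low
    quantile or above the high quantile, inside vs. outside the
    target period."""
    discharge = np.asarray(discharge, dtype=float)
    years = np.asarray(years)
    lo, hi = np.quantile(discharge, [low_q, high_q])
    extreme = (discharge < lo) | (discharge > hi)
    in_period = (years >= period[0]) & (years <= period[1])
    return extreme[in_period].mean(), extreme[~in_period].mean()
```

Applied to a 1971-2019 gauging record, a clearly higher in-period fraction than out-of-period fraction would reproduce the reported shift toward more frequent low and high outflows.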
How to cite: Dhivert, E., Gibon, F., Hochart, K., and Devillers, B.: Evolution of river pollutions under the influence of local hydro-climatic changes - the example of the Bienne River (Jura Mountain, France), EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5986, https://doi.org/10.5194/egusphere-egu2020-5986, 2020.
EGU2020-13793 | Displays | ITS2.4/HS12.1
Sr-Nd-Pb isotope fingerprint analysis of sediment from the river Weser (Germany) and its implication to trace human and climate-induced impactsFeifei Deng, Stefen Hellmann, Tristan Zimmerman, and Daniel Pröfrock
River systems in Germany have come under increasing pressure from human activities and the changing global climate in recent decades. Human activities such as agriculture and industrial manufacturing have supplied contaminants to many rivers, greatly affecting river ecosystems. Extreme events resulting from the changing global climate, such as increasingly frequent extraordinary floods and droughts, are playing a growing role in shaping the chemical composition of river systems. To protect these unique river ecosystems, it is important to identify the contributions of these various sources of pressure and to quantitatively assess their relative impacts on the different river systems.
Here, we explore the potential of Sr, Nd, and Pb isotopes as a fingerprinting tool to quantify the relative contributions of the natural and anthropogenic sources supplying material to a river system. Sediment samples were collected from the river Weser, the longest river lying entirely within Germany. The Weser is formed by the junction of two rivers, the Werra and the Fulda, and flows towards its estuary in the North Sea. With a mean discharge of 327 m³/s, it is one of the main rivers discharging into the North Sea. With its two headwaters and its tributaries also sampled, the sampling locations cover a geographical area of agricultural land and industrial sites and extend to the coastal areas of the North Sea. The Weser is therefore ideal for evaluating the impact of various human activities and the changing climate on a river system, and for providing insight into the contribution of the river system to the ocean.
Sediment samples were analysed for their elemental compositions to evaluate the load of each chemical component in the river Weser. Isotopic ratios of Sr, Nd, and Pb were measured by MC-ICP-MS (multi-collector inductively coupled plasma mass spectrometry) using the newly developed automated prepFAST sample purification method (Retzmann et al., 2017). The Sr, Nd and Pb isotope results reported here are the first such dataset obtained from river Weser sediment. Combined with statistical analyses such as principal component analysis, the dataset allows the contributions of the various sources to the load of the river Weser to be evaluated, the flux of the river to the North Sea to be quantified, and the contribution of the river system to contaminants transported into the coastal zone to be estimated. These estimates will also be of interest to stakeholders and governments for targeted management interventions in the socio-economically important Weser river system.
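Isotope-based source apportionment commonly starts from a concentration-weighted two-end-member mixing equation. A minimal sketch, with hypothetical end-member values; the study itself combines Sr-Nd-Pb isotopes with principal component analysis, which this two-source toy does not capture:

```python
def mixing_fraction(r_mix, r_a, r_b, c_a=1.0, c_b=1.0):
    """Mass fraction of end-member A in a binary mixture, solving
    r_mix = (f*c_a*r_a + (1-f)*c_b*r_b) / (f*c_a + (1-f)*c_b),
    where c_a, c_b are the element concentrations in each source."""
    return (c_b * (r_mix - r_b)
            / (c_a * (r_a - r_mix) + c_b * (r_mix - r_b)))

# A sediment whose 87Sr/86Sr lies midway between two equally
# concentrated sources is a 50:50 mixture (ratios are hypothetical):
f = mixing_fraction(0.7095, r_a=0.7100, r_b=0.7090)
```

When the two sources have different Sr concentrations, the weighting by c_a and c_b shifts the mixing curve away from a straight line, which is why concentrations must be carried alongside the isotope ratios.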
References
Retzmann, A., Zimmermann, T., Pröfrock, D., Prohaska, T., Irrgeher, J., 2017. A fully automated simultaneous single-stage separation of Sr, Pb, and Nd using DGA Resin for the isotopic analysis of marine sediments. Analytical and Bioanalytical Chemistry 409, 5463-5480.
How to cite: Deng, F., Hellmann, S., Zimmerman, T., and Pröfrock, D.: Sr-Nd-Pb isotope fingerprint analysis of sediment from the river Weser (Germany) and its implication to trace human and climate-induced impacts , EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13793, https://doi.org/10.5194/egusphere-egu2020-13793, 2020.
EGU2020-21706 | Displays | ITS2.4/HS12.1
Reservoir-river-sea system sediment geochemistry in Fiumi Uniti catchment from Apennines to Adriatic SeaSimone Toller, Salvatore Dominech, Enrico Dinelli, Shouye Yang, Lucilla Capotondi, Francesco Riminucci, and Ivo Vasumini
Sediment samples were collected in 2019 from the Fiumi Uniti catchment in Italy, in an area between the Romagna Apennines and the Adriatic Sea. The sampling covered the Ridracoli reservoir, a large artificial basin at 480 m a.s.l. created by the construction of a dam on the Bidente River and used as the region's main drinking water supply and for hydropower production; river sediments throughout the catchment that includes the dam (the Bidente, Ronco and Fiumi Uniti rivers and their tributaries); and marine sediments from the Adriatic Sea. In addition, we collected rock and soil samples in the reservoir area to characterize element behaviour during weathering and transport.
Here we report chemical concentrations in the different matrices within the Ridracoli reservoir area, as well as a chemical characterization of sediments along the rivers downstream of the dam. The chemical analyses were carried out at the State Key Laboratory of Marine Geology in Shanghai, where the samples underwent a two-step digestion to assess the mobile and residual fractions: a first leaching step with 1N HCl, followed by a second with pure HNO3 and HF.
The chemical differences between rock, soils and sediments inside the reservoir revealed a pattern of element mobility that can be compared with the geochemistry of the surrounding sediments to assess pathways in the geochemical cycles of elements. The concentration ratios between matrices show an enrichment in soils relative to rock for some elements (>1.3; Li, V, Cr, Mn, Sr, Cd, U) and a slight depletion in lake sediments relative to rocks (0.8-0.9). The REE ratio between lake sediments and the other matrices (i.e., rocks, soils, and stream sediments) is 0.7-0.8, while for other trace elements (Li, V, Mn, Fe, Ni) it is 1.1-1.2, showing the opposite behaviour.
The most mobile elements, assessed via the ratio between the first leaching step and the total composition, are Mn (extractability of 0.7) and Sr (0.8), followed by Co, Cu, Se, Cd and Pb (around 0.3-0.4). The most stable elements (highest in the residual fraction) are Ti, Rb, Zr and Cs (max 0.015). Cu and Pb seem to be more mobile in sediments than in rock and soil, whereas the mobility of the other analytes does not appear to be affected by the matrix. REE are quite mobile, showing good extractability for Eu, Gd, Tb and Dy. Spider diagrams of REE normalized to PAAS (Post Archean Australian Shale) show similar shapes with Gd peaks. A difference can be seen between rocks (values around 0.8-1.2) and sediments, the latter showing higher values (1.2-1.4).
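The enrichment ratios and extractabilities quoted above are simple concentration quotients between matrices or digestion steps. A minimal sketch, with all numbers hypothetical placeholders:

```python
def matrix_ratio(conc_a, conc_b):
    """Element-wise concentration ratio between two matrices,
    e.g. soil/rock (>1 means enrichment in matrix A)."""
    return {el: conc_a[el] / conc_b[el] for el in conc_a if el in conc_b}

def extractability(leached, total):
    """Fraction of an element released in the first (1N HCl)
    leaching step relative to the total composition."""
    return {el: leached[el] / total[el] for el in leached if el in total}

rock = {"Mn": 500.0, "Sr": 120.0}   # mg/kg, hypothetical values
soil = {"Mn": 700.0, "Sr": 160.0}
print(matrix_ratio(soil, rock))     # both elements enriched in soil
```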
The importance of this study lies in what sediment quality reveals about the impact of human activities on river systems, and in the understanding it provides of the functioning of the river-sea system in Romagna, specifically in the Ridracoli reservoir catchment.
How to cite: Toller, S., Dominech, S., Dinelli, E., Yang, S., Capotondi, L., Riminucci, F., and Vasumini, I.: Reservoir-river-sea system sediment geochemistry in Fiumi Uniti catchment from Apennines to Adriatic Sea, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21706, https://doi.org/10.5194/egusphere-egu2020-21706, 2020.
EGU2020-22064 | Displays | ITS2.4/HS12.1
Prediction of organic matter degradability in river sedimentsJulia Gebert and Florian Zander
Under anaerobic conditions, degradation of organic matter in river sediments leads to gas formation, with organic carbon being released mainly as CH4 and CO2. Gas bubbles reduce sediment density, viscosity and shear strength, impede sonic depth finding and are suspected to affect the sediments’ rheological properties. Moreover, methane (CH4) is a potent greenhouse gas with a global warming potential (GWP100) of 28-36. Therefore, the climate impact may vary greatly depending on the way sediments are managed (for example, type and frequency of dredging and relocation in the water body or treatment on land). The objective of this paper is therefore to quantify the time-dependent stability, or inversely, the lability of sediment organic matter (SOM) as a basis for prediction of effects on sediment mechanical properties and on the release of greenhouse gases.
Over a two-year period, more than 200 samples of predominantly fine-grained sediment were collected from nine locations within a 30 km transect through the Port of Hamburg. All samples were, amongst other analyses, subjected to long-term (> 250 days) aerobic and anaerobic incubation for measurement of SOM degradation, yielding a comprehensive data set on the time-dependent change in degradation rates and the corresponding size of differently degradable SOM pools. SOM degradability exhibited a pronounced spatial variability with an approximately tenfold higher anaerobic and a roughly fivefold higher aerobic degradability of upstream SOM compared to downstream SOM. Lower δ13C values, higher DNA concentrations and a higher share of organic carbon in the light density fraction as well as elevated chlorophyll concentrations in the water phase support the hypothesis of increased biological sources of SOM at upstream locations and increased SOM degradability in shallow compared to deep layers (Zander et al., 2020).
First statistical and time series analyses indicate that:
- Long-term SOM lability appears to be predictable from short-term measurements.
- The relationship between short-term and long-term SOM degradation is site-specific and also differs for layers of different age (depth). This supports the above-mentioned variability between sites regarding the size of differently degradable carbon pools, as well as for the depth profile at any one site.
- The relevance of the available electron acceptors (redox conditions) for SOM degradation, i.e. the ratio between carbon release under aerobic and anaerobic conditions, differs less by site but more so by layers of different age (depth). This is plausible as especially the top layers are exposed to more variability in redox conditions than the deeper layers that are always under reducing conditions.
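One standard way to formalize "differently degradable SOM pools" is a two-pool first-order model; a sketch with hypothetical pool sizes and rate constants, as the abstract does not specify the model actually fitted:

```python
import math

def two_pool_release(t_days, labile_c, k_labile, stable_c, k_stable):
    """Cumulative organic C released by day t from a labile and a
    stable pool, each degrading with first-order kinetics."""
    return (labile_c * (1.0 - math.exp(-k_labile * t_days))
            + stable_c * (1.0 - math.exp(-k_stable * t_days)))

labile, k_l = 12.0, 0.15    # mg C/g and 1/day (hypothetical)
stable, k_s = 40.0, 0.004
short_term = two_pool_release(30, labile, k_l, stable, k_s)
long_term = two_pool_release(250, labile, k_l, stable, k_s)
# With these parameters the labile pool is nearly exhausted after 30
# days, so the short-term signal mostly reflects its size, while the
# >250-day release is governed by the stable pool's rate constant.
```

Under such a model, short-term incubations constrain the labile pool and hence carry predictive information about long-term lability, consistent with the first bullet point above.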
Zander, F., Heimovaara, T., Gebert, J. (2020): Spatial variability of organic matter degradability in tidal Elbe sediments. Journal of Soils and Sediments, accepted for publication.
How to cite: Gebert, J. and Zander, F.: Prediction of organic matter degradability in river sediments, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22064, https://doi.org/10.5194/egusphere-egu2020-22064, 2020.
EGU2020-1973 | Displays | ITS2.4/HS12.1
Provenance and Transport Process of Fine Sediments in Central South Sea Mudhyen goo cho
The Central South Sea Mud (CSSM), developed off the Seomjin River estuary, is known to be supplied with sediments from the Heuksan Mud Belt (HMB) and the Seomjin River. However, to form a mud belt, more sediment must be supplied than these sources alone provide. In this study, clay mineral and major element analyses were performed on cores 16PCT-GC01 and 16PCT-GC03 to investigate changes in the provenance and transport pathways of CSSM sediments. Huanghe sediments are characterized by higher smectite and Changjiang sediments by higher illite, while Korean river sediments contain more kaolinite and chlorite than Chinese river sediments. Korean river sediments also have higher Al, Fe and K concentrations than Chinese river sediments, whereas Chinese river sediments are higher in Ca, Mg and Na. Clay minerals and major elements can therefore serve as useful provenance indicators. Based on our results, the CSSM can be divided into three sediment units. Unit 3, corresponding to the lowstand stage, is interpreted to record sediments supplied from the Huanghe to the study area by coastal or tidal currents. Unit 2, corresponding to the transgressive stage, is interpreted to show a weaker Huanghe influence and a stronger influence of the Changjiang and Korean rivers. Unit 1, corresponding to the highstand stage, when sea level reached its present position and the modern circulation system formed, is interpreted to record sediments supplied from the Changjiang and Korean rivers to the study area by the present currents.
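The qualitative discrimination criteria above (high smectite suggesting the Huanghe, high illite the Changjiang, high kaolinite and chlorite the Korean rivers) can be sketched as a nearest-end-member assignment in clay-mineral composition space. The end-member percentages below are hypothetical placeholders, not values from this study:

```python
import math

# Hypothetical end-member clay-mineral compositions (%), for
# illustration only; real assignments would use measured or published
# source-region compositions.
END_MEMBERS = {
    "Huanghe":       {"smectite": 12, "illite": 62, "kaolinite": 10, "chlorite": 16},
    "Changjiang":    {"smectite": 4,  "illite": 70, "kaolinite": 12, "chlorite": 14},
    "Korean rivers": {"smectite": 2,  "illite": 58, "kaolinite": 22, "chlorite": 18},
}

def nearest_source(sample):
    """Return the end-member with the smallest Euclidean distance to
    the sample's clay-mineral composition (percentages)."""
    def dist(em):
        return math.sqrt(sum((sample[m] - em[m]) ** 2 for m in sample))
    return min(END_MEMBERS, key=lambda name: dist(END_MEMBERS[name]))
```

In practice, provenance studies use ternary discrimination diagrams or mixing models with measured end-members; this is only the skeleton of such an assignment.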
How to cite: cho, H. G.: Provenance and Transport Process of Fine Sediments in Central South Sea Mud, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1973, https://doi.org/10.5194/egusphere-egu2020-1973, 2020.
EGU2020-3816 | Displays | ITS2.4/HS12.1
Quartz fluid inclusions characteristics of fluvial deposit in Changjiang River and their implications
Chendong Ge and Zhenqiang Ji
The morphological characteristics of quartz inclusions in sediments from five locations in the upper, middle, and lower reaches of the Changjiang River were analyzed. The source indication of the sediments is discussed through differences in shape, size, quantity, gas percentage, and genetic type. From upstream to downstream, the characteristics of quartz inclusions in the sediments differ, and the inclusion types present in the samples become gradually more diverse from upstream to the estuary. Sediment influx from the tributaries of the Changjiang River introduces new types of quartz inclusions downstream and in the estuary. In terms of quantity and size, most quartz inclusions fall in the range of 2-4 μm in size and 10-150/mm3 in number, although the number and size ranges differ between locations. In sample SGJS-01, collected from Shigu, the size range is 2-18 μm and the number is 2-166 per volume. In samples YBCJ-01, YZD-63, and YZD-10, collected from Yibin, Yichang, and Wuhan, the size ranges are 2-15 μm, 2-10 μm, and 2-12 μm, and the numbers are 1-270, 2-220, and 1-308 per volume, respectively. The proportion of primary inclusions in SGJS-01 from Shigu is 14%, higher than in the middle and lower reaches: it decreases to 6% in YBCJ-01 from Yibin, and is 8%, 6%, and 5% for YZD-63, YZD-10, and HK-01, respectively. The change in the proportion of primary inclusions reflects differences in the source rock types of the sediments; these differences are expressed in the type, size, quantity, and proportion of primary inclusions. The characteristics of quartz inclusions could thus offer a new way to trace the source of sediments.
How to cite: Ge, C. and Ji, Z.: Quartz fluid inclusions characteristics of fluvial deposit in Changjiang River and their implications, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3816, https://doi.org/10.5194/egusphere-egu2020-3816, 2020.
ITS2.7/HS12.2 – Plastic in freshwater environments
EGU2020-5891 | Displays | ITS2.7/HS12.2 | Highlight
Assessing litter loads and estimating macroplastic emission rates of three major North Sea tributaries – Ems, Weser, and Elbe – through holistic, field-based observations
Rosanna Schöneich-Argent, Kirsten Dau, and Holger Freund
Research into the scope of litter pollution, particularly plastic debris, in freshwater systems has revealed levels similar to those in the marine and coastal environment. However, global model estimates of riverine plastic emission rates are largely based on microplastic studies, as long-term and holistic observations of riverine macroplastics remain scarce. Our study therefore aimed to contribute a detailed assessment of macrolitter in the transitional waters of three major North Sea tributaries: Ems, Weser, and Elbe. We hypothesised that the larger and more intensely used the river, the more polluted it would be. Litter surveys were carried out in four river compartments: along the embankment, on the river surface, in the water column, and on the river bed. Plastic generally comprised 88-100 % of all recorded debris items. Our data revealed spatio-temporal variability and distinct pollution levels for each compartment. Beaches had the highest debris diversity and were significantly more littered than vegetated sites and harbours, while stony embankments were least polluted. Benthic litter levels appeared substantial despite the likely rapid burial of objects under high suspended sediment loads. Extrapolated to daily mean emission rates, more plastic litter is discharged into each estuary via the river surface than through the water column. Combining both, the Ems emits over 700 macroplastic items daily, the Weser more than 2,700, and the Elbe ~196,000. Using the mean (median) plastic item mass recorded from water column samples, i.e. 6.3 g (1.7 g), this equates to ~4.5 (1.2) kg d-1 and ~1.6 (0.4) t y-1 of plastic waste discharged by the Ems, ~17.2 (4.6) kg d-1 and ~6.3 (1.7) t y-1 by the Weser, and ~1.2 (0.3) t d-1, i.e. ~451 (122) t y-1, carried into the North Sea by the Elbe. These rates deviate considerably from previous model estimates of the plastic loads discharged by these rivers.
Future studies should therefore ground-truth model estimates with more river-specific and long-term field observations, which will ultimately help assess the effectiveness of waste management and reduction strategies inland and on water.
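The arithmetic behind the reported mass fluxes (items per day × mean item mass, converted to kg d-1 and t y-1) can be sketched as follows. The item counts and masses are those stated in the abstract; the unit conversion itself is our illustration, not the authors' code:

```python
# Sketch of the emission-rate arithmetic from the abstract.
# Item counts (items/day) and per-item masses (g) are taken from the text;
# the conversion items/day x g/item -> kg/day -> t/year is the assumption here.

ITEM_MASS_MEAN_G = 6.3    # mean plastic item mass from water column samples
ITEM_MASS_MEDIAN_G = 1.7  # median plastic item mass

daily_items = {"Ems": 700, "Weser": 2_700, "Elbe": 196_000}

def mass_flux(items_per_day, item_mass_g):
    """Convert an item flux and per-item mass into kg/day and t/year."""
    kg_per_day = items_per_day * item_mass_g / 1000.0
    t_per_year = kg_per_day * 365 / 1000.0
    return kg_per_day, t_per_year

for river, n in daily_items.items():
    kg_d, t_y = mass_flux(n, ITEM_MASS_MEAN_G)
    print(f"{river}: ~{kg_d:.1f} kg/d, ~{t_y:.1f} t/y (mean item mass)")
```

Running this reproduces the order of magnitude quoted above, e.g. ~4.4 kg d-1 and ~1.6 t y-1 for the Ems with the mean item mass.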
How to cite: Schöneich-Argent, R., Dau, K., and Freund, H.: Assessing litter loads and estimating macroplastic emission rates of three major North Sea tributaries – Ems, Weser, and Elbe – through holistic, field-based observations, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5891, https://doi.org/10.5194/egusphere-egu2020-5891, 2020.
EGU2020-19268 | Displays | ITS2.7/HS12.2 | Highlight
Plastic waste input from Guadalquivir River to the ocean
Rocio Quintana Sepúlveda, Daniel González Fernández, Andrés Cózar Cabañas, Cesar Vilas Fernández, Enrique González Ortegón, Francisco Baldo Martínez, and Carmen Morales Caselles
Around 8 million tons of plastic waste leak from land into the ocean annually. Rivers are one of the main pathways of plastic input into the ocean, but there is no comprehensive information on the amount and nature of the litter they transport. This study presents results of monthly monitoring over a two-year period in the estuary of the Guadalquivir River, southern Spain. The samples, consisting of passive hauls, were taken from a traditional glass eel fishing boat anchored with three nets working in parallel. The nets, with a 1 mm mesh and an opening of 2.5 m (width) × 3 m (depth), filtered around 60,000 m3 per sample. Our methodological approach allowed characterization of virtually all plastic sizes in river waters, comprising micro-, meso- and macroplastics. Plastic items were dominated by pieces of film (70% by number). Microplastics in the 2.5-4.0 mm size interval represented half of all identified items. Small fragments comprised most of the plastic mass input from the Guadalquivir River to the sea. Our results support the relevance of inland fragmentation processes and the role of rivers and estuaries as sources of microplastics to the ocean.
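The filtered volume of such a passive net deployment follows from opening area × flow speed × exposure time. A minimal sketch, using the net opening and target volume stated in the abstract; the flow speed is a purely hypothetical placeholder (the abstract does not report one):

```python
# Sketch of the filtered-volume arithmetic for a passively exposed net:
# volume = net opening area x flow speed x exposure time.
# Net opening (2.5 m x 3 m) and target volume (~60,000 m3) are from the
# abstract; flow speed and duration below are illustrative assumptions.

NET_WIDTH_M, NET_DEPTH_M = 2.5, 3.0

def filtered_volume_m3(flow_speed_m_s, duration_s):
    """Volume of water passing through the fully submerged net opening."""
    return NET_WIDTH_M * NET_DEPTH_M * flow_speed_m_s * duration_s

# e.g. at an assumed 0.5 m/s, reaching ~60,000 m3 would take:
target_m3 = 60_000
speed_m_s = 0.5  # hypothetical current speed
hours = target_m3 / (NET_WIDTH_M * NET_DEPTH_M * speed_m_s) / 3600
print(f"~{hours:.1f} h of exposure at {speed_m_s} m/s")
```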
How to cite: Quintana Sepúlveda, R., González Fernández, D., Cózar Cabañas, A., Vilas Fernández, C., González Ortegón, E., Baldo Martínez, F., and Morales Caselles, C.: Plastic waste input from Guadalquivir River to the ocean, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19268, https://doi.org/10.5194/egusphere-egu2020-19268, 2020.
EGU2020-11731 | Displays | ITS2.7/HS12.2
Plastic Wastes Survey in River Mouths Discharging to Manila Bay
Maria Antonia Tanchuling and Ezra Osorio
This study reports on the amount of plastic waste in five river mouths discharging to Manila Bay, a natural harbor that drains a watershed of approximately 17,000 km2. Of the 17 rivers discharging to the Bay, the five included in the study run through densely populated and highly urbanized Metropolitan Manila: Pasig, Cañas, Tullahan, Meycauayan, and Parañaque. A Waste Analysis and Characterization Study (WACS) was conducted to investigate the composition of the wastes lying on the river banks, from which samples were taken. The wastes collected varied between study sites, although plastic wastes and yard wastes were gathered in all areas. By wet weight, plastics alone comprised 28% of wastes in Cañas, 46% in Meycauayan, 42% in Parañaque, 37% in Pasig, and 27% in Tullahan. The collected plastics were further characterized and categorized into types: hard plastic (PP, HDPE), film plastic (PP, PE), foam (PS, PUR), and other types (PVC, PET). In the Cañas River, film plastics (79%) were the most ubiquitous type of plastic waste, consisting primarily of sachets of household products and single-use plastic bags; few hard plastics and other types such as PVC and PET were collected. The Meycauayan and Parañaque Rivers had almost the same plastic type distribution, with hard plastics dominant, mostly bottles of detergents and toiletries. The Meycauayan River has relatively few establishments near its mouth, indicating that the accumulated plastic wastes originated in the mid- and upstream reaches, where the urbanized and industrialized areas are located.
Furthermore, even though hard plastics represented 38% of wastes in Parañaque, numerous plastic straw ropes were also collected, as fishermen use these straws to tie up their boats in the docking area. Significant amounts of foams and PET bottles were also amassed in these rivers. Plastic wastes from the Pasig River were mostly film plastics (39%) and hard plastics (30%), all household products dumped directly by those residing near the Pasig River mouth; notable quantities of foams and other types of plastic were also retrieved from the sampling area. The Tullahan River has abundant film plastics (35%) and foams (33%) in its river mouth, some of them stuck to rafts tied up along the bank; sachets of household products were dominant, and few hard plastics or other types of plastic were found. The substantial amounts of plastic waste at each river mouth point to poor waste management infrastructure, a lack of materials recovery facilities, and a lack of discipline among people, as these plastics were found to be dumped directly into the water bodies.
How to cite: Tanchuling, M. A. and Osorio, E.: Plastic Wastes Survey in River Mouths Discharging to Manila Bay, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11731, https://doi.org/10.5194/egusphere-egu2020-11731, 2020.
EGU2020-10339 | Displays | ITS2.7/HS12.2 | Highlight
Methods for measuring and modelling plastic transport and accumulation in large rivers
Marcel Liedermann, Sebastian Pessenlehner, Michael Tritthart, Philipp Gmeiner, and Helmut Habersack
Although freshwater systems are known transport paths of plastic debris to the ocean, studies in rivers are rare. Measurements have advanced in recent years, but they hardly address the spatial distribution of plastic debris over the whole water column. Waste collecting activities in the Nationalpark Donau-Auen – a part of the Austrian Danube River east of Vienna – indicate that increasing quantities of plastic waste can also be found near the banks and within the inundation areas of our rivers. The EU-financed project "PlasticFreeDanube" aims to identify the sources, environmental impacts, transported amounts and paths, composition, and possible accumulation zones of this plastic.
A robust, net-based device was developed that can be applied at high flow velocities and discharges, even in large rivers. It consists of a strong, stable equipment carrier that allows steady positioning; three frames can each be equipped with 1-2 nets of different mesh sizes, exposed over the whole water column. The methodology was tested in the Austrian Danube River and revealed high heterogeneity of microplastic concentrations across the cross-section as well as vertically over the depth. Even higher amounts of plastic were found to be transported in a subsurface layer or near the bottom.
Three-dimensional numerical modelling has proven to be a great support in describing and analyzing plastic particle transport in flowing waters. The models were used to characterize flow fields near river engineering structures such as groynes and guiding walls, which are known plastic accumulation zones. They can be used to predict potential accumulation zones in Danubian inundation areas and to provide recommendations for creating "artificial" accumulation zones where plastic can be more easily extracted from the river.
How to cite: Liedermann, M., Pessenlehner, S., Tritthart, M., Gmeiner, P., and Habersack, H.: Methods for measuring and modelling plastic transport and accumulation in large rivers, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10339, https://doi.org/10.5194/egusphere-egu2020-10339, 2020.
EGU2020-22613 | Displays | ITS2.7/HS12.2
Monitoring floating riverine pollution by advanced technology
Irene Ruiz, Oihane Barurko, Irati Epelde, Pedro Liria, Anna Rubio, Julien Mader, and Matthias Delpey
Rivers act as pathways of significant but unquantified amounts of plastic pollution to the ocean. Measuring riverine plastic inputs precisely is crucial to support and ensure the effectiveness of prevention and mitigation waste management actions. However, technological tools capable of monitoring, and consequently accurately assessing, plastic abundance and its temporal variability at river surfaces are lacking. Within the LIFE LEMA project, videometry systems were installed at the mouths of two European rivers (Oria in Spain and Adour in France) and a detection algorithm was developed to monitor litter inputs in near real time. The objective of these developments was to detect riverine pollution at the water surface, quantifying the number of floating items and providing data on their travel speed and size. Between 2018 and 2020, the system was tested under different environmental conditions. These tests led to a second version of the algorithm that improves results by reducing false positives. After these improvements, a new validation was carried out, consisting of a detailed analysis of more than 300 five-minute videos recorded at Orio's station under different river flows, weather conditions, and plastic loads. The validation results highlighted the operational reliability of the system: on a scale of 1 to 5 (1 being very bad and 5 very good), over 70% of the recordings scored 4 or 5. This also demonstrated the great potential of videometry for harmonizing visual observations of floating riverine litter. The data provided by the systems are currently being used in the LEMA TOOL, a tool designed to guide local authorities in managing, monitoring, and forecasting marine litter presence and abundance in the coastal waters of the SE Bay of Biscay.
Furthermore, the data provided is key to evaluate the sources of the pollution and the efficiency of waste management measures within the river basins, towards a successful reduction of plastic inputs into the ocean.
How to cite: Ruiz, I., Barurko, O., Epelde, I., Liria, P., Rubio, A., Mader, J., and Delpey, M.: Monitoring floating riverine pollution by advanced technology, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22613, https://doi.org/10.5194/egusphere-egu2020-22613, 2020.
EGU2020-20888 | Displays | ITS2.7/HS12.2 | Highlight
Seasonal and longitudinal patterns of plastic pollution in a subtropical urban river
Charlotte Haberstroh and Mauricio Arias
Plastic contamination in rivers changes quickly over time and space, driven by factors such as land use, urbanization and population density, climatic conditions, and river hydrology. Understanding the patterns and mechanisms behind these fluctuations is of major importance for estimating and evaluating plastic loads and for advancing management strategies and policies. During an 18-month sampling campaign (May 2018 to October 2019) on the Hillsborough River in Tampa (USA), we studied how seasonality and urban pollution affected the plastic loads transported through the river. We sampled monthly at three sites, strategically located to assess the release of plastic through urban runoff from Tampa, covering two wet seasons and one dry season. At each site, we conducted stationary sampling with a 500-µm mesh neuston net at five positions across the width and depth of the river. Using an Acoustic Doppler Current Profiler, we also collected comprehensive data on flow characteristics and accurately estimated river discharge during sampling events. All samples were processed in the laboratory with state-of-the-art methods to separate plastic particles from the water. Plastic particles were classified by size category, and a subset was identified using Raman spectroscopy. The results of this study show a strong correlation between plastic loads and rainfall seasonality. For instance, mean concentrations close to the river mouth varied from less than 1 count/m3 during the dry season (March-May) to up to 9 counts/m3 during wet months (September). Furthermore, there was a substantial increase in loads as the river passed through the city, mostly peaking at the farthest downstream site close to the river mouth: while median concentrations at the site upstream of the city were 0.21 counts/m3 (range 0-1.68), median concentrations at the station close to the river mouth (in Downtown Tampa) were 1.16 counts/m3 (range 0.14-21.61 counts/m3).
During some months, however, loads were higher at the second site, located in the middle of a residential and commercial district. Differences in plastic loads along the river were explained by river flow accumulation and land use/land cover intensity, though small differences in concentrations between the middle site and the farthest downstream site may be explained by differences in stormwater management practices between these two contrasting socioeconomic areas. This study generated a unique and comprehensive dataset on plastic loads and river hydrology at the watershed scale for evaluating drivers of plastic pollution and rivers as their pathway, providing a basis for the development of management plans for urban rivers and of solution strategies for plastic pollution in similar subtropical watersheds.
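A load estimate of the kind discussed here combines a measured concentration (counts/m3) with river discharge (m3/s). A minimal sketch of that combination; the concentration is the median reported in the abstract, while the discharge value is a purely hypothetical placeholder (the abstract reports ADCP-derived discharges but not their values):

```python
# Sketch: plastic load = concentration x discharge, scaled to a daily rate.
# The 1.16 counts/m3 median is from the abstract; the 15 m3/s discharge
# below is a hypothetical illustration, not a measured value.

SECONDS_PER_DAY = 86_400

def plastic_load_per_day(conc_counts_per_m3, discharge_m3_per_s):
    """Estimate plastic items transported past a station per day."""
    return conc_counts_per_m3 * discharge_m3_per_s * SECONDS_PER_DAY

# e.g. median downstream concentration at an assumed discharge of 15 m3/s
items_per_day = plastic_load_per_day(1.16, 15.0)
print(f"~{items_per_day:,.0f} items per day (illustrative discharge)")
```

This is the same flux logic, applied to particle counts, as the mass-flux calculations used in riverine litter budgets.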
How to cite: Haberstroh, C. and Arias, M.: Seasonal and longitudinal patterns of plastic pollution in a subtropical urban river, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20888, https://doi.org/10.5194/egusphere-egu2020-20888, 2020.
EGU2020-707 | Displays | ITS2.7/HS12.2 | Highlight
Abundance of plastic debris across European and Asian rivers
Caroline van Calcar and Tim van Emmerik
Plastic pollution in the marine environment is an urgent global environmental challenge. Land-based plastics, emitted into the ocean through rivers, are believed to be the main source of marine plastic litter. According to the latest model-based estimates, most riverine plastics are emitted in Asia. However, the exact amount of global riverine plastic emission remains uncertain due to a severe lack of observations. Field-based studies are rare, focus mainly on rivers in Europe and North America, and use strongly varying data collection methods. We present a harmonized assessment of floating macroplastic transport from observations at 24 locations in rivers in seven countries in Europe and Asia. Visual counting and debris sampling were used to assess (1) the magnitude of plastic transport, (2) the spatial distribution across the river width, and (3) the plastic polymer composition. Several waterways in Indonesia and Vietnam carry up to four orders of magnitude more plastic than waterways in Italy, France, and The Netherlands in terms of plastic items per hour. We present a first transcontinental overview of plastic transport, providing observational evidence that, for the sampled rivers, Asian rivers transport considerably more plastics towards the ocean. New insights are presented into the magnitude, composition, and spatiotemporal variation of riverine plastic debris. We emphasize the urgent need for more long-term monitoring efforts. Accurate data on riverine plastic debris are extremely important to improve global and local modeling approaches and to optimize prevention and collection strategies.
How to cite: van Calcar, C. and van Emmerik, T.: Abundance of plastic debris across European and Asian rivers, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-707, https://doi.org/10.5194/egusphere-egu2020-707, 2020.
EGU2020-21579 | Displays | ITS2.7/HS12.2
Microplastics in a UK Landfill: Developing Methods and Assessing Concentrations in Leachate, Hydrogeology, and Release to the Environment
Max Waddell, Nathalie Grassineau, James Brakeley, and Kevin Clemitshaw
Inadequate management of plastic waste has resulted in its ubiquity within the environment, and presents a risk to living organisms. Harm caused by large plastics is well documented, but progressive understanding of microplastics (<5 mm) reveals an ever more unsettling issue. Microplastic contamination is considered an emerging global multidisciplinary issue that would be aided by further research on sources, distribution, abundance, and transport mechanisms. Landfills are a suspected source, but research at these sites is insufficient. Although the risks surrounding microplastics are still inconclusive, there is concern over their accumulation in organisms, their leaching constituents, and their hydrophobic nature. Studying microplastics in the environment, let alone in landfill, is challenging, as standard and accepted methodologies are presently non-existent.
Here, microplastics (1 mm to 25 µm) were evaluated at a long-running UK landfill after first developing a simple, replicable, efficient, and cost-effective sampling and analysis approach. Concentrations and types of microplastics were quantified in raw leachate, treated leachate, waste water, groundwater, and surface water, to characterise abundance, distribution, and released loads to the environment. Samples were filtered in situ, with subsequent purification in the laboratory using Fenton's reagent. Analysis relied heavily on microscopic sorting and counting, but Scanning Electron Microscopy–Energy Dispersive X-Ray Spectroscopy enabled instrumental interrogation of particles suspected to be plastic. Many of the factors investigated here appear novel to the literature; we comprehensively explore: temporal variation of microplastics in raw leachate across different landfill phases and waste ages; their abundance in local groundwater and surface water discharge; microplastic distribution within a leachate treatment plant; and their subsequent release to the environment from a waste water treatment facility. The results build upon the small collection of existing work, but also offer new insights into microplastics' occurrence in and around, and release from, a landfill site.
In total, 62 samples were taken; particles considered microplastics (MP) were most abundant in groundwater, followed by raw leachate > waste water > treated leachate > surface water. The average concentration in groundwater was 105.1±104.3 MP L⁻¹, in raw leachate 3.3±1.7 MP L⁻¹, and in waste water 1.8±0.73 MP L⁻¹. Consistent with other research, fibres were most dominant, but blank samples highlight the great potential for secondary contamination. Imaging of suspect particles revealed the extreme nature and conditions of landfill sites in their generation of microplastics. Analogous to waste water treatment, leachate treatment is shown to reduce microplastics in the discharge by 58.1%, and it is expected that microplastics are retained in the treatment plant sludge. Daily loads from leachate treatment were 142,558±67,744 MP day⁻¹, but from waste water approximately 45.2±18.3 million MP day⁻¹. Ultimately, the landfill is not a final sink of microplastics (>25 µm) but a source to the environment, although it is unlikely to be a significant one. The results highlight the need for reduction strategies at waste water treatment plants and in the site's groundwater boreholes, as well as further investigation to determine the source of the abundant fibres in the surface water.
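The removal efficiency and daily loads reported above follow directly from concentrations and flow rates. A sketch of the arithmetic, using the abstract's 3.3 MP L⁻¹ raw-leachate concentration and 58.1% removal figure; the flow rate is a hypothetical placeholder, not a value from the study:

```python
def removal_efficiency(c_in: float, c_out: float) -> float:
    """Fractional removal of microplastics across a treatment stage."""
    return 1.0 - c_out / c_in

def daily_load(conc_per_litre: float, flow_litres_per_day: float) -> float:
    """Microplastic export in MP per day from concentration and flow."""
    return conc_per_litre * flow_litres_per_day

c_raw = 3.3                        # MP per litre, from the abstract
c_treated = c_raw * (1.0 - 0.581)  # concentration after 58.1% removal
eff = removal_efficiency(c_raw, c_treated)

load = daily_load(c_treated, 100_000)  # hypothetical discharge of 100,000 L/day
print(f"removal efficiency: {eff:.1%}, daily load: {load:,.0f} MP/day")
```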
How to cite: Waddell, M., Grassineau, N., Brakeley, J., and Clemitshaw, K.: Microplastics in a UK Landfill: Developing Methods and Assessing Concentrations in Leachate, Hydrogeology, and Release to the Environment, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21579, https://doi.org/10.5194/egusphere-egu2020-21579, 2020.
EGU2020-8460 | Displays | ITS2.7/HS12.2
Microplastic input in to alluvial meadows of the Rhine river in Cologne, Germany
Katharina Luise Müller, Hannes Laermanns, Markus Rolf, Florian Steininger, Martin Löder, Julia Möller, and Christina Bogner
River systems are major pathways for the transport of microplastic (MP). The Rhine is among the biggest river systems in northwestern Europe in terms of catchment size and discharge. Studies have documented the presence of MP in the Rhine and its tributaries along its course through Germany. The region of Cologne is densely populated, with a variety of land use forms. Thus, an understanding of the presence and entry pathways of MP into alluvial meadows of the Rhine is important for risk assessments.
This study aims to quantitatively analyse transport pathways and sedimentation of MP into the alluvial meadows of the Rhine. We expect that the main entry pathway of MP into these alluvial meadow soils is fluvial transport. Two study sites were chosen in Cologne, one in the southern part of the central city (Poller Wiesen) and one in the northern rural areas of the city (Merkenich-Langel). These sites were chosen because there are no agricultural fields in the direct vicinity that could account for major MP input through surface runoff. The sites were flooded intermittently in the past, with records of the water level during flooding and of the extent of flooded areas. For each site, sampling transects were chosen with increasing elevation and distance relative to the river water level. Samples were investigated for their MP concentrations via FTIR spectroscopy. A digital elevation model supports the understanding of water flow during flood events. Differences in MP concentrations with increasing elevation and distance to the river are thought to be caused by differences in the intensity and frequency of flooding.
How to cite: Müller, K. L., Laermanns, H., Rolf, M., Steininger, F., Löder, M., Möller, J., and Bogner, C.: Microplastic input in to alluvial meadows of the Rhine river in Cologne, Germany, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8460, https://doi.org/10.5194/egusphere-egu2020-8460, 2020.
EGU2020-5308 | Displays | ITS2.7/HS12.2
Degradation of nanoplastics in aquatic environments: reactivity and impact on dissolved organic carbon
Angelica Bianco, Fabrizio Sordello, Mikael Ehn, Davide Vione, and Monica Passananti
The large production of plastic material (PlasticsEurope, 2019), together with the mishandling of plastic waste, has resulted in ubiquitous plastic pollution, which now reaches even the most remote areas of the Earth (Allen et al., 2019; Bergmann et al., 2019). Plastics undergo a slow process of erosion in the environment that decreases their size: microplastics (MPs) and nanoplastics (NPs) have diameters between 1 µm and 5 mm and lower than 1 µm, respectively (Frias and Nash, 2019).
The occurrence, transformation and fate of MPs and NPs in the environment are still unclear. Therefore, the objective of this work is to better understand the reactivity of NPs, using an aqueous suspension of polystyrene NPs (PS-NPs) as a proxy, in the presence of sunlight and chemical oxidants. The results obtained are relevant both to the atmospheric aqueous phase, such as cloud and fog droplets, and to surface waters. We investigated the reactivity of PS-NPs with light and with two important environmental oxidants: ozone (O3) and hydroxyl radicals (•OH). The adsorption of O3 on PS-NPs was investigated, showing a significant O3 uptake. Moreover, for the first time, a reactivity constant with •OH was determined. We found a linear correlation between the kinetic constants measured for three different sizes of PS-NPs and the surface area exposed by the particles. Degradation products (short-chain carboxylic acids and aromatic compounds), obtained by direct and •OH-mediated photolysis of PS-NPs suspensions, were identified by high-resolution mass spectrometry. Irradiation of a PS-NPs suspension under natural sunlight for 1 year showed the formation of formic acid and of organic compounds similar to those found in riverine and cloud dissolved organic matter.
This work is crucial for assessing the impact of abiotic NP degradation in atmospheric and surface waters; indeed, the reactivity constant and the degradation products can be implemented in environmental models to estimate the contribution of NP degradation to the natural dissolved organic matter in the aqueous phase. A preliminary simulation using the APEX (Aqueous Photochemistry of Environmentally occurring Xenobiotics) model (Bodrato and Vione, 2014) shows that in NP-polluted environments (10⁹ particles mL⁻¹) there is potential for NPs to significantly scavenge •OH, if the content of natural organic matter is not too high, as observed for surface and cloud water.
Allen, S., et al., 2019. Nat. Geosci. 12, 339–344. https://doi.org/10.1038/s41561-019-0335-5
Bergmann, et al., 2019. Sci. Adv. 5, eaax1157. https://doi.org/10.1126/sciadv.aax1157
Bodrato, M., Vione, D., 2014. Environ. Sci.: Processes Impacts 16, 732–740. https://doi.org/10.1039/C3EM00541K
Frias, J., Nash, R., 2019. Mar. Pollut. Bull. 138, 145–147. https://doi.org/10.1016/j.marpolbul.2018.11.022
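The •OH-scavenging result above rests on competition kinetics: NPs consume a significant fraction of the photogenerated •OH only when their scavenging rate rivals that of dissolved organic matter (DOM). A sketch of that argument; all rate constants and concentrations below are purely illustrative assumptions, not values from the study:

```python
# Pseudo-first-order competition for •OH between nanoplastics (NPs) and
# dissolved organic matter (DOM). The fraction of •OH scavenged by NPs is
# the NP scavenging rate divided by the total scavenging rate.

def oh_fraction_scavenged(k_np: float, np_conc: float,
                          k_dom: float, dom_conc: float) -> float:
    """Fraction of •OH consumed by NPs under pseudo-first-order competition."""
    r_np = k_np * np_conc
    return r_np / (r_np + k_dom * dom_conc)

# Hypothetical rate constants and concentrations (consistent units assumed):
K_NP = 1.0e9    # assumed •OH rate constant for NPs
K_DOM = 2.5e4   # assumed •OH rate constant for DOM

f_low_dom = oh_fraction_scavenged(K_NP, 1.0e-6, K_DOM, 0.5)    # low-DOM water
f_high_dom = oh_fraction_scavenged(K_NP, 1.0e-6, K_DOM, 20.0)  # high-DOM water

print(f"NP share of •OH: low DOM {f_low_dom:.1%}, high DOM {f_high_dom:.2%}")
```

The example illustrates the abstract's qualitative conclusion: the NP share of •OH shrinks as the DOM content grows.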
How to cite: Bianco, A., Sordello, F., Ehn, M., Vione, D., and Passananti, M.: Degradation of nanoplastics in aquatic environments: reactivity and impact on dissolved organic carbon, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5308, https://doi.org/10.5194/egusphere-egu2020-5308, 2020.
EGU2020-12924 | Displays | ITS2.7/HS12.2
Microplastic contamination in remote alpine lakes
David Gateuille, Julia Dusaucy, Frédéric Gillet, Johnny Gaspéri, Rachid Dris, Grégory Tourreau, and Emmanuel Naffrechoux
Since their first detection in the 1970s, microplastics have been a growing public concern. Although a large number of studies address this contamination, the fate of microplastics in freshwater remains poorly understood. In particular, the identification of sources, the degradation processes of these compounds and their impacts on aquatic ecosystems constitute fields of research to be investigated. PLASTILAC is the first project focusing on the presence and fate of microplastics in four remote alpine lakes (Muzelle Lake, Vert Lake, Pormenaz Lake and Anterne Lake), which were investigated during summer 2019. The aims of this study were to better understand microplastic dynamics in small remote lake catchments and to quantify the impacts of various anthropogenic activities on microplastic contamination.
The lakes were chosen to allow comparison of the different transfer processes occurring at the catchment scale. The lakes of Muzelle and Anterne have similar sizes (about 10,000 m²) and altitudes (about 2100 m a.s.l.). These two lakes are isolated and have no direct access apart from hikes of several hours; they are, however, separated by a distance of about 120 km. A comparison of their contamination levels therefore makes it possible to assess the background contamination at the scale of the Northern Alps. In contrast, the Anterne, Pormenaz and Vert Lakes are very close to each other but cover a wide gradient of altitude (from 1260 to 2100 m a.s.l.) and of exposure to anthropogenic activities. Their comparison allows us to study the influence of distance from potential sources on microplastic contamination.
To investigate the dynamics of microplastics at the lake basin scale, a multi-compartment approach was implemented. The water column was sampled using a specially designed boat that allowed the filtration of the large volumes (approximately 200 cubic meters) of water required in lightly contaminated environments. The boat was equipped with a 50 µm mesh. A similar system was used to sample the lake outlets and determine the outflows of microplastics. In order to quantify the incoming flows, an atmospheric fallout collector was also installed. Finally, lake sediments were collected to quantify the fraction of microplastics eliminated from the water column through sedimentation. All of these data made it possible to establish a mass balance of microplastics at the scale of the watershed of lakes and to determine the characteristic times of contamination.
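The mass balance described above amounts to bookkeeping of fluxes into and out of the lake, from which a characteristic contamination time can be derived. A minimal sketch; all numbers are hypothetical placeholders, not project results:

```python
# Lake-scale microplastic mass balance:
# change in standing stock = atmospheric input - outlet export - sedimentation.
# Fluxes are in particles per day; values below are assumptions for illustration.

def stock_change_per_day(atmospheric_in: float, outlet_out: float,
                         sedimentation: float) -> float:
    """Net daily change in the lake's standing stock of microplastics."""
    return atmospheric_in - outlet_out - sedimentation

def residence_time_days(standing_stock: float, total_loss_per_day: float) -> float:
    """Characteristic time = standing stock / total loss rate."""
    return standing_stock / total_loss_per_day

net = stock_change_per_day(atmospheric_in=1200.0, outlet_out=700.0,
                           sedimentation=400.0)          # hypothetical fluxes
tau = residence_time_days(standing_stock=2.0e5,          # hypothetical stock
                          total_loss_per_day=700.0 + 400.0)

print(f"net accumulation: {net:.0f} particles/day, residence time: {tau:.0f} days")
```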
Although analyses are still in progress, the first results show that even the lakes most distant from anthropogenic sources have significant microplastic contamination, of the order of 1 particle per cubic meter. Owing to the distance to the sources, the microplastic pollution consisted of fibres, while fragments and micro-beads were not observed.
How to cite: Gateuille, D., Dusaucy, J., Gillet, F., Gaspéri, J., Dris, R., Tourreau, G., and Naffrechoux, E.: Microplastic contamination in remote alpine lakes, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12924, https://doi.org/10.5194/egusphere-egu2020-12924, 2020.
EGU2020-19776 | Displays | ITS2.7/HS12.2
Temporal distribution and seasonal fluxes of microplastics in the sediments of UK rural and urban lakesChiara Bancone, Neil Rose, and Robert Francis
Microplastics (<5 mm) are persistent environmental pollutants characterised by heterogeneous physico-chemical properties and a broad range of shapes, sizes, colours and compositions. Microplastics may be directly released into the environment at this size (e.g. pellets and cosmetic microbeads), in which case they are known as primary microplastics. However, the majority of microplastics are secondary, i.e. they originate from the degradation of larger plastic items. An important source of secondary microplastics is fibres released during the washing of synthetic garments. Although microplastic contamination is thought to be ubiquitous in aquatic ecosystems, very little is known about the scale and extent of inputs, or about rates of change, in rivers and lakes. In particular, lake sediments may represent an important sink for microplastics, as well as providing a means to assess historical trends.
To assess microplastic abundance, distribution, historical records and composition in the sediments of UK urban and rural lakes, sediment cores have been collected at representative locations in two ponds on Hampstead Heath, in the Borough of Camden, London, and in three lakes in the Norfolk Broads National Park, in eastern England. Microplastics extracted from sediment cores have been identified, and variation in polymer-type analysed through sediment chronostratigraphy. Sediment chronologies can help quantify the historical flux of microplastics from terrestrial environments to freshwaters, reflecting changes in microplastic production over time.
To highlight seasonal fluxes and variations in microplastic distribution and abundance in the lakes examined, newly designed sediment traps were built at the UCL Geography Laboratories and anchored to the bottom of the study sites to collect material sinking from the water column. The traps are being monitored, emptied, cleaned and redeployed every three months over a period of about two years.
This study presents results on the temporal distribution and seasonal fluxes of microplastics in sediments from the Hampstead Heath ponds in London (urban sites) and from the Norfolk Broads National Park (rural sites). The identification of plastic polymers, together with the assessment of microplastic temporal distribution and seasonal patterns of accumulation in lakes, will help identify factors influencing microplastic distribution and pollution sources for lakes. The results from this project will deliver a better understanding of the number and scale of sources of microplastics in urban and rural lakes, improving future risk assessments and prevention strategies.
How to cite: Bancone, C., Rose, N., and Francis, R.: Temporal distribution and seasonal fluxes of microplastics in the sediments of UK rural and urban lakes, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19776, https://doi.org/10.5194/egusphere-egu2020-19776, 2020.
EGU2020-22468 | Displays | ITS2.7/HS12.2
Analysis, prevalence and impact of microplastics in freshwater and estuarine environments: an evidence reviewJohn Iwan Jones, John Francis Murphy, Amanda Arnold, James Laurence Pretty, Kate Spencer, Adriaan Albert Markus, and Andre Dick Vethaak
We used a systematic review procedure to assess the available evidence on the analysis, prevalence and impact of microplastics in freshwater and estuarine environments. As the study of microplastics in freshwaters is relatively new, measurement methods are yet to be standardized, and a wide variety of methods of variable robustness have been used. Critically, the sampling methodology used in the literature had a systematic influence on the concentration of microplastic particles returned. The volume of water sampled varied over many orders of magnitude, and there was a direct relationship between the size of the smallest particles studied and the volume of water sampled, in both freshwaters and estuaries: large volumes of water can only be sampled using nets of relatively coarse mesh, which in turn do not capture smaller particles. Consequently, the mean abundance of microplastic particles reported was inversely correlated with both the volume of water sampled and the size of particles studied.
The size of microplastic particles had a substantial and overriding effect on threshold concentrations above which microplastics affect freshwater and estuarine biota. For the ecotoxicological endpoints of feeding, behaviour, growth and survival there was a clear relationship between the size of the particles used in the test and the threshold concentration at which an effect was seen. Although the taxonomic coverage of test organisms was limited, there were sufficient data to test the influence of taxonomic group used on size-specific thresholds for Crustacea, fish and algae. There was no significant effect of either the endpoint measured or the taxonomic group used, suggesting that there might not be any difference in sensitivity among different taxa.
To establish a threshold concentration at which microplastics present a hazard to a limited number of taxa, quantile regression was used to determine the size-specific concentration of microplastics that was lower than 90% of the thresholds identified for survival and, as a more conservative limit, across all endpoints tested, including sublethal effects. Comparing these thresholds with the concentrations of microplastics reported by field studies, the calculated size-specific threshold concentration for lethal effects was considerably higher than 99% of reported environmental concentrations. Lethal effects of microplastics on freshwater and estuarine biota are therefore likely to be limited to exceptional circumstances. Over certain size ranges, the calculated size-specific threshold concentration for sublethal effects was exceeded by the highest 10% of concentrations reported from environmental samples, suggesting a risk of sublethal effects at a small proportion of sites.
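The quantile-regression step can be illustrated on synthetic data: fit a line in log-size vs. log-threshold space at the 10th quantile, so that roughly 90% of effect thresholds lie above it. This is a minimal pinball-loss sketch with invented data, not the study's dataset or its actual fitting routine.

```python
import numpy as np

def fit_quantile_line(x, y, q=0.1, lr=0.01, n_iter=20_000):
    """Fit y ~ a + b*x at quantile q by subgradient descent on the pinball loss."""
    a, b = 0.0, 0.0
    for _ in range(n_iter):
        r = y - (a + b * x)               # residuals
        g = np.where(r > 0, -q, 1.0 - q)  # subgradient of pinball loss w.r.t. prediction
        a -= lr * g.mean()
        b -= lr * (g * x).mean()
    return a, b

# Synthetic effect thresholds that increase with particle size (illustrative only).
rng = np.random.default_rng(0)
log_size = rng.uniform(0.0, 3.0, 200)                           # log10 particle size
log_thresh = 1.0 + 1.5 * log_size + rng.normal(0.0, 0.5, 200)   # log10 threshold conc.

a, b = fit_quantile_line(log_size, log_thresh, q=0.1)
frac_above = np.mean(log_thresh > a + b * log_size)
print(f"slope={b:.2f}, thresholds above the line: {frac_above:.0%}")
```

At the optimum of the q = 0.1 pinball loss, about 90% of the thresholds sit above the fitted line, which is exactly the "lower than 90% of thresholds" criterion described above.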
How to cite: Jones, J. I., Murphy, J. F., Arnold, A., Pretty, J. L., Spencer, K., Markus, A. A., and Vethaak, A. D.: Analysis, prevalence and impact of microplastics in freshwater and estuarine environments: an evidence review, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22468, https://doi.org/10.5194/egusphere-egu2020-22468, 2020.
EGU2020-3968 | Displays | ITS2.7/HS12.2 | Highlight
River corridors as global hotspots of microplastic pollutionStefan Krause, Jennifer Drummond, Holly Nel, Jesus Gomez-Velez, Iseult Lynch, and Greg Sambrook Smith
The total production of plastics is estimated to be ~10 billion metric tons, half of which is thought to have ended up as waste in the environment. However, the total mass of plastic found in the world’s ocean garbage patches has been calculated as less than 1 million metric tons, a paradox that leaves the whereabouts of the majority (>99.9%) of plastic waste produced so far unexplained.
Recent research suggests that the accumulation of plastic (in particular microplastic, < 5 mm in size) in river corridors may be even greater than that in the world’s oceans. Our model-based quantifications reveal that rivers do not function solely as conduits for plastics travelling to the oceans, but also represent long-term sinks, with microplastics in particular being buried in streambeds and floodplain sediments. This includes the development of pronounced hotspots of long-term plastic accumulation, showing that these emerging pollutants have already created a pollution legacy that will affect generations to come.
The principles that govern the spatially and temporally dynamic inputs of plastics into river corridors, as well as the fate and transport mechanisms that explain how plastics are transported and where they accumulate, are still poorly understood. Experimental evidence of microplastic pollution in river corridors is hampered by the absence of unified sampling, extraction and analysis approaches, inhibiting a comprehensive investigation of global source distributions and fate pathways. We have therefore initiated the 100 Plastic Rivers programme to provide a global baseline of microplastic pollution in rivers and of its drivers and controls, in order to develop a mechanistic understanding of fate and transport dynamics and to create predictive capacity by informing the parameterisation of global plastic transport models. Preliminary results demonstrate the suitability of the 100 Plastic Rivers approach and help validate our predictions of global plastic storage in river corridors.
How to cite: Krause, S., Drummond, J., Nel, H., Gomez-Velez, J., Lynch, I., and Sambrook Smith, G.: River corridors as global hotspots of microplastic pollution, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3968, https://doi.org/10.5194/egusphere-egu2020-3968, 2020.
EGU2020-22000 | Displays | ITS2.7/HS12.2
Over 1000 rivers accountable for 80% of global riverine plastic emissions into the oceanLourens Meijer, Tim van Emmerik, Ruud van der Ent, Laurent Lebreton, and Christian Schmidt
Plastic waste increasingly accumulates in the marine environment, but data on the distribution and quantification of riverine sources, required for the development of effective mitigation, are limited. Our new model approach includes geographically distributed data on plastic waste, land use, wind, precipitation and rivers, and calculates the probability of plastic waste reaching a river and subsequently the ocean. This probabilistic approach highlights regions that are likely to emit plastic into the ocean. We calibrated our model using recent field observations and show that emissions are distributed over up to two orders of magnitude more rivers than previously thought. We estimate that over 1,000 rivers account for 80% of global annual emissions, which range between 0.8 and 2.7 million metric tons per year, with small urban rivers among the most polluting. These high-resolution data allow for the focused development of mitigation strategies and technologies to reduce riverine plastic emissions.
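The probabilistic idea can be sketched per grid cell as mismanaged waste multiplied by the probability of reaching a river and then the ocean. All values below are invented for illustration; the actual model uses calibrated, geographically distributed land-use, wind, precipitation and river data.

```python
import numpy as np

# Toy version of the per-cell probabilistic emission calculation.
# Three hypothetical grid cells; every number is an assumption.
waste = np.array([120.0, 40.0, 300.0])     # mismanaged plastic waste (t/yr) per cell
p_to_river = np.array([0.05, 0.20, 0.01])  # P(waste reaches a river), e.g. terrain/distance
p_to_ocean = np.array([0.60, 0.90, 0.30])  # P(river delivers it to the ocean), e.g. flow/length

# Expected riverine plastic emissions to the ocean, per cell and in total (t/yr).
emissions = waste * p_to_river * p_to_ocean
print(emissions, emissions.sum())
```

Summed over many cells and grouped by river, a calculation of this shape yields the ranking from which the "over 1,000 rivers account for 80%" result is derived.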
How to cite: Meijer, L., van Emmerik, T., van der Ent, R., Lebreton, L., and Schmidt, C.: Over 1000 rivers accountable for 80% of global riverine plastic emissions into the ocean, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22000, https://doi.org/10.5194/egusphere-egu2020-22000, 2020.
EGU2020-2390 | Displays | ITS2.7/HS12.2
Understanding the physical and biological controls on microplastic transport in lakes.Hassan Elagami, Sven Frei, Jan-Pascal Boos, and Benjamin Gilfedder
Microplastics (MPs) have been found ubiquitously in oceanic and terrestrial environments. As the production and consumption of plastic polymers increase, the amount of plastic evading accepted disposal pathways and entering natural systems is also expected to increase. To date, the focus of plastic and MP research in particular has been on the ocean, but there has recently been a rapid increase in interest in MP levels and distribution in terrestrial systems. However, existing studies have mostly focused on the quantification and distribution of MP contamination in the sediment or water column of rivers and lakes. The aim of this project is to investigate the fundamental physical and biological influences on the transport of microplastics (MPs) in lake systems. In particular, we focus on understanding the migration and distribution of MPs, and on a systematic investigation of the transport and sedimentation of MPs in the lake water column. Laboratory and field experiments are planned to investigate the behavior of different MP polymers, shapes and sizes under different conditions and to determine how these influence MP transport.
The settling velocity of MPs in stationary water was measured in the laboratory using particle image velocimetry (PIV) and compared to manual timing of the sinking velocity. The trajectories of the settling MPs were also tracked under weak turbulence. In addition, the results were compared with theoretical calculations.
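For small particles at low Reynolds number, the theoretical comparison mentioned above typically starts from Stokes' law. A minimal sketch follows; the particle size and density are illustrative assumptions, not the experimental values used in this study.

```python
def stokes_settling_velocity(d, rho_p, rho_f=998.0, mu=1.0e-3, g=9.81):
    """Stokes terminal settling velocity (m/s) of a small sphere of diameter d (m)
    and density rho_p (kg/m3) in a fluid of density rho_f (kg/m3) and dynamic
    viscosity mu (Pa s). Valid only at low particle Reynolds number."""
    return (rho_p - rho_f) * g * d**2 / (18.0 * mu)

# Example: a 100 µm PET-like sphere (density ~1380 kg/m3) in water at ~20 °C.
v = stokes_settling_velocity(100e-6, 1380.0)
print(f"{v * 1000:.2f} mm/s")
```

Deviations of measured velocities from this prediction are exactly what the comparison is designed to reveal, since real MPs are rarely spherical and may carry biofilms.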
To investigate microbial colonization and biofilm formation on the surface of MPs, samples were exposed in a natural lake environment for varying time periods. The colonization of MP surfaces by microorganisms and their excretion of extracellular polymeric substances (EPS) were examined by laser microscopy, and subsequently the effect of microbial colonization on settling velocity was determined. In this work we show that the transport of MPs is complex, as it is influenced by plastic type, shape and biological colonization, as well as by the hydrodynamic conditions in the lake water column.
How to cite: Elagami, H., Frei, S., Boos, J.-P., and Gilfedder, B.: Understanding the physical and biological controls on microplastic transport in lakes., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2390, https://doi.org/10.5194/egusphere-egu2020-2390, 2020.
EGU2020-3331 | Displays | ITS2.7/HS12.2
An experimental investigation of microplastic transport in fluvial systemsJan-Pascal Boos, Benjamin-Silas Gilfedder, Hassan Elagami, and Sven Frei
Although a major part of marine microplastic (MP) pollution originates from rivers and streams, the mechanistic behavior of MPs in fluvial systems is only poorly understood. MPs enter fluvial systems from, e.g., wastewater treatment plant (WWTP) effluents, sewer overflows during heavy rain events, agricultural runoff, atmospheric fallout, road runoff, or the fragmentation of plastic litter. As part of this project we investigate the hydrodynamic transport mechanisms that control the behavior and redistribution of MPs in open channel flow and in streambed sediments. Hydrodynamic conditions in open channel flow are represented in an experimental flume environment. Different porous media (e.g. aqua beads, glass beads and sand) are used in the flume experiments to shape typical bedform structures such as riffle-pool sequences, ripples and dunes. The aim of this experimental setup is to create hydrodynamic flow conditions, such as hydraulic jumps and low- and high-velocity environments, under which the transport and sedimentation behavior of MPs can be investigated under realistic conditions. Hydrodynamic flow conditions in the flume are characterized using Laser Doppler Anemometry (LDA) and Particle Image Velocimetry (PIV). Detection and tracking of fluorescent MP particles in open channel flow and in porous media will be achieved with a fluorescence camera system.
How to cite: Boos, J.-P., Gilfedder, B.-S., Elagami, H., and Frei, S.: An experimental investigation of microplastic transport in fluvial systems , EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3331, https://doi.org/10.5194/egusphere-egu2020-3331, 2020.
EGU2020-6786 | Displays | ITS2.7/HS12.2
Understanding the plastic cocktail using distributionsMerel Kooi and Albert A Koelmans
Plastic pollution poses a complex challenge, given the large variety in the properties of the items and particles. Usually, models and experiments focus on a small, sometimes arbitrary, subset of the total plastic continuum. This inherently implies a limitation, and will never be fully satisfactory if we are to understand the true behavior of plastic of all sizes, shapes and densities in the environment. Here, we present a novel approach in which plastics are fully characterized by continuous distribution functions. For microplastics, we report and discuss distributions obtained for the marine and freshwater environments, from water and sediment samples. For macroplastics, we report spatial and temporal trends based on distributions derived from monitoring data from the OSPAR beach litter program. We discuss how these micro- and macroplastic distributions can feed directly into transport and fate models. Additionally, they can be used to design effect and fate experiments, in which mixtures of (environmental) plastic should be used to better represent the complex mixture that plastic really is. With this approach, the often-cited problem of complexity as a limiting factor is circumvented, bringing a true understanding of plastic fate and effects within reach.
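As one concrete example of a continuous distribution function, microplastic size spectra are often approximated by truncated power laws. The sketch below draws sizes by inverse-transform sampling; the exponent and size bounds are assumptions for illustration, not the distributions reported in this abstract.

```python
import numpy as np

def sample_power_law(n, alpha=1.6, x_min=20.0, x_max=5000.0, seed=0):
    """Draw n particle sizes (µm) from a truncated power law p(x) ~ x**(-alpha)
    on [x_min, x_max], via inverse-transform sampling of the analytic CDF."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)
    a = 1.0 - alpha  # exponent of the integrated (cumulative) power law
    return (u * (x_max**a - x_min**a) + x_min**a) ** (1.0 / a)

sizes = sample_power_law(100_000)
# A continuous description lets us query any subset, e.g. the fraction of
# particles smaller than a hypothetical 300 µm net mesh cut-off.
frac_small = np.mean(sizes < 300.0)
print(round(frac_small, 2))
```

Because the distribution is continuous, quantities such as mesh-dependent capture fractions or mass-weighted moments follow from the same function, rather than from an arbitrary size subset.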
How to cite: Kooi, M. and Koelmans, A. A.: Understanding the plastic cocktail using distributions, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6786, https://doi.org/10.5194/egusphere-egu2020-6786, 2020.
EGU2020-7698 | Displays | ITS2.7/HS12.2 | Highlight
Freshwater pathways and litterMaría Cabrera Fernández
In recent years, there has been an increasing number of studies on freshwater micro-litter and how it ends up in the ocean. Nevertheless, macro-litter studies are uncommon in freshwater landscapes and rarer still for rivers: research almost always focuses on estuaries rather than rivers.
For several years, the Asociación Paisaje Limpio has been developing studies to establish an affordable methodology for measuring macro-litter along the course of rivers.
Our methodology is therefore a combination of research and action. We do not only gather macro-litter data in the field; we also identify specific litter problems along the river and act on them through campaigns, agreements with companies and public administrations, etc.
We have found it necessary to combine different types of methodology to monitor different types of rivers in order to draw conclusions:
- Visual counting: counting floating macro-litter on the surface using the RIMMEL app.
- On the riverbank: through eLitter, a citizen-science tool created by the Asociación Paisaje Limpio and the Asociación Vertidos Cero. eLitter is harmonized with other marine-litter methodologies (Marine Litter Watch, MARNOBA in Spain) and its litter classification is based on the OSPAR protocol.
- On the riverbed: eLitter is also used where the riverbed is accessible; where it is not, a Van Veen dredge is used instead, a method that has been applied in other marine-litter projects on the seabed.
- Floating booms: these give the proportion of plastics in the captured floating litter, extrapolated using the water flow.
- Nets deployed from a kayak: to study the plastic concentration in the water column, extrapolated using the water flow.
- General water-quality analysis: useful to support hypotheses about the source of litter in a river, mainly where the source is sewage, as with wet wipes, cotton buds, etc.
- Case study: the river Lagares, a Spanish river in Pontevedra, Galicia. The Lagares flows into the Atlantic in a Special Protection Area (SPA), a designation under the European Union Directive on the Conservation of Wild Birds.
The Asociación Paisaje Limpio has been working on this river since 2018, applying the different methodologies described above.
How to cite: Cabrera Fernández, M.: Freshwater pathways and litter , EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7698, https://doi.org/10.5194/egusphere-egu2020-7698, 2020.
EGU2020-8051 | Displays | ITS2.7/HS12.2
Abundance and distribution of microplastics in water and sediments of the river Elbe, GermanyFriederike Stock, Annkatrin Weber, Christian Scherer, Christian Kochleus, Georg Dierkes, Martin Wagner, Nicole Brennholt, and Georg Reifferscheid
Plastic pollution in the aquatic environment has gained worldwide attention in recent years. Microplastics have been investigated for over 45 years, especially in the marine environment, but only in recent years has research also started to focus on freshwater environments. Within the framework of a project on macro- and microplastics in German rivers, samples were taken at 11 sites along the German stretch of the river Elbe in order to study plastic pollution in water and sediment, detect sinks of microplastics and better understand transport mechanisms.
The sediment samples were taken with a Van Veen grab, the water samples with an Apstein plankton net (mesh size 150 µm) at the same locations. The sediment samples were pre-sorted by wet sieving, organic digestion and density separation and filtered onto aluminium oxide filters. For the water samples, the organic matter was digested using a reagent composed of equal volumes of 10 M KOH and 30 % H2O2; the microplastic particles were then isolated from the remaining matrix by density flotation using a 1.6 g/mL potassium formate solution and pressure filtration. Analysis was done by visual inspection, with selected particles identified by Fourier-transform infrared spectroscopy and masses quantified by pyrolysis GC-MS.
The results for the Elbe sediments reveal that tentative microplastic concentrations differed strongly between river compartments. Microplastic concentrations in the sediments were on average 600,000-fold higher than in the water samples (referred to the same volume). The number of particles also varies significantly between sampling sites: in the sediment samples, microplastic concentrations decreased downstream, whereas in the water samples concentrations varied more erratically. Particle shape is also site-specific; in two samples, more than 80 % of particles were spheres, whereas the six locations downstream show an increasing share of fragments. Polymer composition differed between the water and sediment phases, with mostly PE and PP in the water samples and a more diverse distribution in the sediments.
How to cite: Stock, F., Weber, A., Scherer, C., Kochleus, C., Dierkes, G., Wagner, M., Brennholt, N., and Reifferscheid, G.: Abundance and distribution of microplastics in water and sediments of the river Elbe, Germany, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8051, https://doi.org/10.5194/egusphere-egu2020-8051, 2020.
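The density-flotation step described above can be illustrated with a quick check of which common polymers a 1.6 g/mL separation fluid recovers: a particle floats, and is isolated, when its density is below the fluid's. The polymer densities below are typical literature values used purely for illustration:

```python
# Density flotation: particles less dense than the separation fluid float
# and are recovered. Fluid density (1.6 g/mL potassium formate) is from
# the abstract; polymer densities are typical literature values (g/mL).
FLUID_DENSITY = 1.6

polymer_density = {
    "PE": 0.95, "PP": 0.90, "PS": 1.05, "PET": 1.38, "PVC": 1.40,
}

floats = {name: rho < FLUID_DENSITY for name, rho in polymer_density.items()}
print(floats)  # all of these common polymers float in a 1.6 g/mL solution
```

A denser fluid such as potassium formate thus recovers even heavy polymers (PET, PVC) that a plain saturated NaCl solution (about 1.2 g/mL) would leave in the sediment fraction.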
EGU2020-8116 | Displays | ITS2.7/HS12.2 | Highlight
Controls on microplastic flux mechanisms in a large riverFreija Mendrik, Daniel Parsons, Christopher Hackney, Cath Waller, Vivien Cumming, Robert Dorrell, and Grigorios Vasilopoulos
The majority of marine plastic pollution originates from land-based sources, with rivers as the dominant transport agent. Although many of the potential ecotoxicological consequences of plastics are well known, research has only recently begun to explore the source-to-sink dynamics of plastics in the environment. Despite the widespread recognition that rivers dominate the global flux of plastics to the ocean, there is a key knowledge gap regarding the nature of this flux, the behaviour of microplastics (<5 mm) in transport and their pathways from rivers into the ocean. Additionally, little is presently known about the role of biota in the transport of microplastics, for example through biofilm formation, and how this influences microplastic fate. This prevents progress in understanding microplastic fate and hotspot formation, and curtails the development of effective mitigation and policy measures.
As part of the National Geographic Rivers of Plastic project, a combined-laboratory and field investigation was conducted. Fieldwork was undertaken in the Mekong River, one of the top global contributors to marine plastic pollution with an estimated 37,000 tonnes of plastic being discharged from the Mekong Delta each year. This flux is set to grow significantly in accordance with the projected population increase in the basin. The results presented herein outline a suite of laboratory experiments that explore the role of biofilms on the generation of microplastic flocs and the impact on buoyancy and settling velocities. Aligned fieldwork details the particulate flux and transport of microplastic, throughout the vertical velocity profile, across the river-delta-coast system, including the Tonle Sap Lake. The results also highlight potential areas of highest ecological risk related to the dispersal and distribution of microplastics. Finally, pilot data on the levels of microplastics within fish from the Mekong system are also quantified to explore the potential impact of biological uptake on the fate and sinks of plastics within the system.
How to cite: Mendrik, F., Parsons, D., Hackney, C., Waller, C., Cumming, V., Dorrell, R., and Vasilopoulos, G.: Controls on microplastic flux mechanisms in a large river , EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8116, https://doi.org/10.5194/egusphere-egu2020-8116, 2020.
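The effect of biofilm growth on buoyancy and settling velocity, which the laboratory experiments explore, can be sketched with the Stokes settling law for small spheres, w = (ρ_p − ρ_f) g d² / (18 μ). This is a hedged illustration only: the densities and particle size are hypothetical, and real biofouled flocs depart from ideal spheres:

```python
# Stokes settling velocity for a small sphere (low Reynolds number):
# w = (rho_p - rho_f) * g * d**2 / (18 * mu).
# A sketch of how biofilm growth, by raising effective particle density,
# can turn a buoyant particle into a sinking one. Values are illustrative.
G = 9.81          # gravitational acceleration, m/s^2
MU = 1.0e-3       # dynamic viscosity of water, Pa*s
RHO_WATER = 1000.0  # fresh water density, kg/m^3

def settling_velocity(d, rho_p, rho_f=RHO_WATER, mu=MU):
    """Return w in m/s; positive = sinking, negative = rising."""
    return (rho_p - rho_f) * G * d**2 / (18 * mu)

d = 100e-6  # hypothetical 100 um particle
print(settling_velocity(d, 920.0))   # clean polypropylene: rises
print(settling_velocity(d, 1050.0))  # biofouled aggregate: sinks
```

The sign change is the point: modest density gain from biofouling or flocculation is enough to move a particle from the surface flux into the settling flux, shifting where in the river-delta-coast system it accumulates.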
EGU2020-8321 | Displays | ITS2.7/HS12.2
Monitoring submerged riverine macroplastics using echo soundingSophie Broere, Tim van Emmerik, Daniel González-Fernández, Willem Luxemburg, Andrés Cózar, Nick van de Giesen, and Matthieu de Schipper
Riverine plastics cause severe global problems regarding risks to human health and environmental damage. The major part of the plastic waste that ends up in the oceans is transported via rivers. However, estimates of the global quantities of plastics entering the oceans carry great uncertainties due to methodological difficulties in accurately quantifying land-based plastic fluxes into the ocean. As yet, there are no standard methods to determine quantities of plastics in rivers. To reduce the amount of plastic waste in the natural environment, information on plastic fluxes from rivers to seas is needed. Focusing on monitoring the plastic litter transported by rivers is useful because measures can be implemented more easily in rivers than in seas. Moreover, consistent measuring techniques are crucial to optimise prevention and mitigation strategies, especially in countries with high expected river plastic emissions.
Additionally, depending on plastic characteristics and turbulent river flow conditions, a considerable portion of riverine litter can be transported below the surface in the water column. Current monitoring methods for macroplastics are labour-intensive and do not provide continuous measurements of submerged riverine plastics. Moreover, most research to date has focused on floating macro-litter rather than submerged plastics. The aim of this research was to find a standard method, applicable in different river systems, for detecting submerged macroplastics.
Using the Deeper Chirp+ fishfinder, several tests were conducted both in the Guadalete river basin in southern Spain and in the laboratory at TU Delft. Spanish, and in general European, rivers are estimated to transport two to three orders of magnitude less plastic than rivers in Asia (Malaysia and Vietnam), but should not be neglected; the Guadalete river basin formed a suitable location to test this new method. First, monitoring in the Guadalquivir river was carried out, with a net used to validate the readings of the sonar. Furthermore, the detection capabilities of the echosounder were tested in the Guadalete river basin using plastic targets, which were released in the river and passed the sensor at known times. Finally, laboratory tests at TU Delft investigated the relations between the sonar signal and flow velocity, object depth and object size.
The tests show that submerged macroplastics can be detected using echo sounding. Moreover, a relation between the sonar signal and litter size is found, and signal intensities can be related to object properties. In conclusion, echo sounding has high potential for obtaining more accurate plastic flux estimates.
How to cite: Broere, S., van Emmerik, T., González-Fernández, D., Luxemburg, W., Cózar, A., van de Giesen, N., and de Schipper, M.: Monitoring submerged riverine macroplastics using echo sounding , EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8321, https://doi.org/10.5194/egusphere-egu2020-8321, 2020.
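A flux estimate of the kind this method ultimately targets can be sketched as a simple extrapolation from a monitored sub-section of the channel to the full cross-section, assuming items are uniformly distributed across the width. The function and all numbers below are hypothetical illustrations, not the authors' procedure:

```python
# Back-of-envelope river plastic flux extrapolation: items detected in a
# monitored sub-section of the channel, scaled to the full cross-section
# assuming a uniform lateral distribution. All numbers are hypothetical.
def extrapolate_flux(items_counted, minutes, observed_width_m, river_width_m):
    """Return estimated items per hour across the whole river width."""
    items_per_hour = items_counted * 60.0 / minutes
    return items_per_hour * river_width_m / observed_width_m

# e.g. 12 items in 30 minutes over a 5 m swath of a 50 m wide river:
print(extrapolate_flux(12, minutes=30, observed_width_m=5.0,
                       river_width_m=50.0))  # -> 240.0 items/hour
```

Echo sounding extends this idea vertically: by detecting submerged items throughout the water column rather than only at the surface, the uniform-distribution assumption can be replaced by measured depth profiles.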
EGU2020-10816 | Displays | ITS2.7/HS12.2
The role of biostabilisation in controlling microplastic flux in riversPieter Fourie, Annie Ockelford, and James Ebdon
Microplastic burden in aquatic environments is now recognised as a potential threat to human and environmental health. Although microplastic transfer from the terrestrial river network contributes up to 90% of the plastics in the oceans, the factors controlling that transfer remain largely unconstrained. In rivers, microplastics are stored within sediment beds, where both the microplastic particles and the sediment grains can become colonised by biofilms. Biofilm growth on river sediments has been shown to increase a particle's resistance to entrainment, but the effect of such biostabilisation on microplastic flux has not yet been considered. This is despite the fact that biofilm growth can change the buoyancy, surface characteristics and aggregation properties of plastic particles such that they are deposited rather than transported, increasing their residence time.
To quantify the effect of biostabilisation on microplastic flux, a two-stage experimental programme was run. During the first stage, bricks were submerged in a gravel-bed stream and biofilms were allowed to colonise them for 4 weeks. The biofilm-covered bricks were then extracted and placed within a recirculating ‘incubator’ flume divided into 9 smaller channels. Within each of the 9 channels, either a uniform sand, a uniform gravel or a bimodal gravel mix was placed in Perspex boxes. Each sediment type was seeded with either high-density PVC microplastic nurdles (D50 of 3 mm, density of 1.33 g/cm3) or polyester fibres (5 mm long, 0.5-1 mm wide, density of 1.38 g/cm3), both at a concentration of 1%. Blanks were also run in which the sediment mixtures did not contain any microplastics. The flume was run with representative day/night lighting cycles to let the biofilms colonise the test sediments for 0 (control), 2, 4 or 6 weeks. At the end of each colonisation period, the Perspex boxes containing the sediment were removed from the incubator flume and placed within a glass-sided, flow-recirculating flume (8.2 m x 0.6 m x 0.5 m); this constituted the second stage of the experiment. During this stage the samples were exposed to a series of flow steps of increasing discharge designed to establish the entrainment threshold of the D50 sediment grains. Entrainment thresholds were calculated for each growth stage to establish the effect of biostabilisation on sediment and microplastic flux. Bedload and microplastic transport rates were also measured at every flow step to establish the effect of biostabilisation on overall fluxes. Finally, photographs of the sediment surface were taken at each flow step to estimate the percentage loss of biofilm from the surface.
Discussion concentrates on linking changes in the degree of biofilm colonisation with the entrainment threshold of the sediment, and on the links between biofilm colonisation and the character of the bedload and microplastic flux. The outcome of this research is pertinent to understanding the role biostabilisation plays in the residence times of microplastics within fluvial systems.
How to cite: Fourie, P., Ockelford, A., and Ebdon, J.: The role of biostabilisation in controlling microplastic flux in rivers, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10816, https://doi.org/10.5194/egusphere-egu2020-10816, 2020.
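The entrainment threshold the flume experiments measure is conventionally expressed through the Shields criterion, τ_c = θ_c (ρ_s − ρ) g d. A minimal sketch, assuming a commonly used reference Shields parameter and a purely hypothetical multiplicative biofilm factor (quantifying this factor is what the experiments are for; the value here is illustrative):

```python
# Critical bed shear stress for particle entrainment from the Shields
# criterion: tau_c = theta_c * (rho_s - rho_water) * g * d.
# theta_c = 0.045 is a commonly used reference value; biofilm_factor > 1
# sketches how biostabilisation might raise the threshold (illustrative).
G = 9.81            # m/s^2
RHO_WATER = 1000.0  # kg/m^3

def critical_shear_stress(d, rho_s, theta_c=0.045, biofilm_factor=1.0):
    """Return tau_c in Pa for a grain of diameter d (m), density rho_s (kg/m^3)."""
    return biofilm_factor * theta_c * (rho_s - RHO_WATER) * G * d

d = 3e-3  # 3 mm PVC nurdle, density 1330 kg/m^3 (values from the abstract)
clean = critical_shear_stress(d, rho_s=1330.0)
fouled = critical_shear_stress(d, rho_s=1330.0, biofilm_factor=1.5)
print(clean, fouled)  # a biostabilised bed needs higher stress to entrain
```

Because plastic densities (1.33-1.38 g/cm3) are far below quartz (about 2.65 g/cm3), the bare-grain threshold for the nurdles is already low; any biofilm-induced increase is therefore proportionally more important for microplastic residence times than for the sediment itself.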
EGU2020-12942 | Displays | ITS2.7/HS12.2
Behaviour of different micro-plastics during degradation in fresh and sea waters, with focus on synthetic microfibersSimone Rossouw, Nathalie Grassineau, and James Brakeley
The rising concern over plastic pollution has driven a sharp increase in the number of studies undertaken globally in a large variety of environments. Microplastic studies have only recently started and much remains unknown. One important question is how different microplastics behave during degradation and how fast degradation happens.
In this study, a range of plastic waste was tested in both tap water (freshwater conditions) and salt water (marine conditions) to observe whether water chemistry and timescale play a significant role in degradation. The samples were exposed to natural weathering and UV light for up to 3 months and then checked for changes, including changes in weight. The aim of the study was to determine whether different types of plastic waste degrade differently, combined with the impact of varying lengths of exposure and water medium. Following this, to reproduce the natural aquatic environment, the samples were placed in water on a shaking table for 24 hours and observations were made to assess their propensity for degradation.
Although the timescale was short, different degrees of degradation occurred for each type of plastic studied, with some samples losing significant mass, some none, and some gaining mass. As expected, the low-density plastics very quickly showed visible signs of decay and some fragmentation, indicating that they rapidly become available to small organisms at the bottom of the food chain. In contrast, hard plastics are more resistant, showing little or no degradation. However, this study highlights specific issues with the media in which the plastics are found, particularly in the marine environment, where some harder materials become “encrusted” with sea salt, increasing their density. By slowly sinking through the marine water column, they become available to all marine fauna, not just those at the surface.
Although all microplastic particles require attention, the most common and abundant type found in fresh waters is synthetic fibres, likely sourced from washing-machine effluent and sewage treatment. Following the findings above, the focus of the study turned to non-natural fibres, comparing water pollution from general household laundry with that from the industrial manufacture of synthetic textiles. Methods involving collecting effluent from washing machines and from industrial manufacturing machines were tested, and the resulting samples were digested with hydrogen peroxide. The study shows evidence of substantial losses of synthetic fibres from garments at the industrial scale as well as at household level, highlighting the pressing need for urban areas to improve current wastewater management to increase recycling and the capture of microplastics.
How to cite: Rossouw, S., Grassineau, N., and Brakeley, J.: Behaviour of different micro-plastics during degradation in fresh and sea waters, with focus on synthetic microfibers, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12942, https://doi.org/10.5194/egusphere-egu2020-12942, 2020.
The rising concern over plastic pollution has spiked the number of studies being undertaken globally in a large variety of environments. Microplastic studies have only recently started and there is still so much unknown. One important question to answer is how do different microplastics behave during degradation and how fast does it happen.
In this study, a range of plastic waste was tested in both tap water (freshwater conditions) and salt water (marine conditions) to observe whether water chemistry and timescale play a significant role in degradation. The samples were exposed to natural weathering and UV light for up to 3 months and then checked for changes, including changes in weight. The aim of the study was to determine whether different types of plastic waste degrade differently, and to assess the impact of varying lengths of exposure and of the water medium. Following this, to reproduce the natural aquatic environment, the samples were placed in water on a shaking table for 24 hours and observed to assess their propensity for degradation.
Although the timescale was short, the plastics studied degraded to different degrees: some samples lost significant mass, some none, and some gained mass. As expected, the low-density plastics quickly showed visible signs of decay and some fragmentation, indicating that they rapidly become available to small organisms at the bottom of the food chain. In contrast, hard plastics were more resistant, showing little or no degradation. However, this study highlights specific issues with the media in which the plastics are found, particularly in the marine environment, where some harder materials became “encrusted” with sea salt, increasing their density. By slowly sinking through the marine water column, these materials become available to all marine fauna, not just those at the surface.
Although all microplastic particles require attention, the most common and abundant type found in fresh waters is synthetic fibres, likely sourced from washing machine effluent and sewage treatment. Following the findings above, the focus of the study turned to non-natural fibres, comparing water pollution from general household laundry with that from the industrial manufacture of synthetic textiles. Methods for collecting effluent from washing machines and industrial manufacturing machines were tested, and the resulting samples were digested with hydrogen peroxide. This study shows evidence of substantial losses of synthetic fibres from garments at the industrial scale as well as the household level, highlighting the pressing need for urban areas to improve current wastewater management so as to increase recycling and the capture of microplastics.
How to cite: Rossouw, S., Grassineau, N., and Brakeley, J.: Behaviour of different micro-plastics during degradation in fresh and sea waters, with focus on synthetic microfibers, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12942, https://doi.org/10.5194/egusphere-egu2020-12942, 2020.
EGU2020-14509 | Displays | ITS2.7/HS12.2
Heteroaggregation of micro-polystyrene in the presence of amorphous iron hydroxide (ferrihydrite)Johanna Schmidtmann, Georg Papastavrou, Nicolas Helfricht, and Stefan Peiffer
Plastic pollution in marine and terrestrial environments is ubiquitous and a widespread problem. While the occurrence of plastics and microplastics, as well as their effects on marine and freshwater organisms, has already been investigated in numerous studies, little attention has so far been paid to the fate, transport, and transformation processes of microplastics in the environment. In this work, the aggregation behavior of polystyrene (PS) microplastics in the presence of ferrihydrite, a natural inorganic colloid, was studied using zeta potential and hydrodynamic diameter measurements, as well as scanning electron microscopy (SEM), considering the influence of pH and ionic strength. An increase in pH led to a more negative surface charge of PS. Furthermore, increasing concentrations of NaCl and CaCl2 showed that mono- and divalent cations influence the zeta potential differently: divalent ions compress the electric double layer more efficiently than monovalent ions, resulting in a decrease of repulsive forces. Studies on the heteroaggregation between PS and ferrihydrite showed that the strongest aggregation took place at neutral pH; aggregate sizes in samples at neutral pH increased significantly compared to more acidic and alkaline conditions. Furthermore, the results indicated that at neutral pH, ferrihydrite completely covers the PS surface. SEM images and hydrodynamic diameter measurements revealed that heteroaggregation between PS and ferrihydrite increased with ionic strength. Our results demonstrate that the fate of microplastic particles in aquatic systems can be strongly influenced by natural colloidal water constituents such as iron hydroxides.
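The double-layer compression reported above follows directly from the Debye screening length, which shrinks with ionic strength. The sketch below (illustrative constants and salt concentrations, not values measured in the study) shows why a divalent salt such as CaCl2 screens surface charge more efficiently than NaCl at the same concentration:

```python
import math

# Physical constants (SI units)
E0 = 8.854e-12        # vacuum permittivity, F/m
KB = 1.381e-23        # Boltzmann constant, J/K
NA = 6.022e23         # Avogadro's number, 1/mol
E_CHARGE = 1.602e-19  # elementary charge, C

def debye_length_nm(salt_molar, z_cat, z_an, n_cat=1, n_an=1,
                    eps_r=78.5, T=298.15):
    """Debye screening length (nm) for a single fully dissociated salt."""
    # Ionic strength I = 1/2 * sum(c_i * z_i^2), converted to mol/m^3
    I = 0.5 * (n_cat * z_cat**2 + n_an * z_an**2) * salt_molar * 1000.0
    kappa_inv = math.sqrt(eps_r * E0 * KB * T / (2.0 * NA * E_CHARGE**2 * I))
    return kappa_inv * 1e9

# 10 mM NaCl vs 10 mM CaCl2: the divalent cation raises the ionic
# strength threefold, so the double layer is markedly thinner
print(debye_length_nm(0.010, 1, 1))        # NaCl: about 3.0 nm
print(debye_length_nm(0.010, 2, 1, 1, 2))  # CaCl2: about 1.8 nm
```

A thinner double layer means weaker electrostatic repulsion between PS and ferrihydrite surfaces, consistent with the stronger heteroaggregation observed at higher ionic strength.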
How to cite: Schmidtmann, J., Papastavrou, G., Helfricht, N., and Peiffer, S.: Heteroaggregation of micro-polystyrene in the presence of amorphous iron hydroxide (ferrihydrite), EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-14509, https://doi.org/10.5194/egusphere-egu2020-14509, 2020.
EGU2020-14666 | Displays | ITS2.7/HS12.2
Properties and fate of microplastics entering drinking water treatment plantsKaterina Novotna, Lenka Cermakova, Lenka Pivokonska, and Martin Pivokonsky
Microplastics (MPs) are being detected in aquatic environments worldwide, including seawaters and freshwaters. Moreover, a few studies have also reported the presence of MPs in potable water, both in water from public supplies and in bottled water. Although potential adverse effects on human health are not yet known, the occurrence of MPs in drinking water raises considerable attention. Drinking water treatment plants (DWTPs) pose a barrier preventing MPs from passing from raw water into treated water intended for human consumption; thus, the fate of MPs entering DWTPs is of great interest. In order to encapsulate current knowledge in this regard, and to identify research needs in this field, more than 100 studies were reviewed to provide concise conclusions. Focus was laid on: (i) summarizing available information on MP abundance and character in water resources and in drinking water; (ii) combining research results on MP contents at the inflow and outflow of DWTPs and on MP removal by distinct treatment technologies; (iii) comparing MPs to other common pollutants whose removal is commonly addressed at DWTPs; and (iv) providing insight into the fate of MPs at wastewater treatment plants (WWTPs), which act as a barrier against the transition of MPs from waste to nature and thus occupy an “opposite” position to DWTPs. Additionally, (v) the fate of MPs in DWTP and WWTP sludge was also addressed. This review brings together valuable information regarding MP occurrence, character, and fate in freshwater aquatic environments in relation to the appearance of MPs at water treatment facilities, i.e. DWTPs and WWTPs, which may act as both sink and source of this emerging pollutant. On this basis, a “cycle” of MPs between natural water bodies and “water in use by humans” is proposed.
How to cite: Novotna, K., Cermakova, L., Pivokonska, L., and Pivokonsky, M.: Properties and fate of microplastics entering drinking water treatment plants, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-14666, https://doi.org/10.5194/egusphere-egu2020-14666, 2020.
EGU2020-15198 | Displays | ITS2.7/HS12.2
Plants, plastic and rivers: Do water hyacinths play a role in riverine macroplastic transport?Evelien Castrop, Tim van Emmerik, Sanne van den Berg, Sarian Kosten, Emilie Strady, and Thuy-Chung Kieu-Le
Excess plastic debris in the environment is a worldwide problem. While the sources of this plastic are partly land-based, the function of rivers as transport pathways for plastics is an emerging research field, and the transport dynamics of macroplastic in river systems are still poorly understood (Blettler et al., 2018; Schmidt et al., 2017). By studying the interaction between riverine plastic transport and water plants, these transport dynamics can be better understood, which may help mitigate the environmental plastic debris problem.
A field study in the Saigon River, Vietnam, found a correlation between macroplastic (>5 cm) abundance and organic material, where no other correlations were found (van Emmerik et al., 2019). The organic material in this river was predominantly identified as water hyacinth, an invasive plant common in Southeast Asia. We therefore hypothesize that water hyacinths play an important role in the spatiotemporal dynamics of riverine macroplastic transport. In this study, we developed an image analysis method to detect macroplastics and floating vegetation (lab-grown water hyacinths). Image analysis in combination with drone technology creates opportunities to collect field data, with promising early results (Geraeds et al., 2019). We analyzed the images to approximate the amount of plastic and vegetation visible from the surface, and subsequently used these data to evaluate the relationship between plastic abundance and vegetation in rivers. The method developed in this study can be used to collect data in the field. Targeted observations of plastic entrapment in water hyacinths may shed additional light on the potential of using water hyacinths as a proxy for riverine macroplastic transport dynamics.
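The abstract does not detail the image analysis pipeline. As a rough sketch of the general approach, simple per-pixel colour rules can separate green vegetation from bright, near-grey plastic and yield surface coverage fractions per frame. All thresholds and the function name `coverage_fractions` are illustrative assumptions, not the authors' method:

```python
import numpy as np

def coverage_fractions(rgb):
    """Rough coverage estimate from an RGB image array (H, W, 3), values 0-255.

    Green-dominant pixels are counted as floating vegetation; bright,
    low-saturation pixels as candidate plastic. Thresholds are illustrative.
    """
    img = rgb.astype(float)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    brightness = img.mean(axis=-1)
    saturation = img.max(axis=-1) - img.min(axis=-1)

    vegetation = (g > r * 1.2) & (g > b * 1.2)                       # green dominates
    plastic = (brightness > 180) & (saturation < 40) & ~vegetation   # bright, near-grey

    n_pixels = rgb.shape[0] * rgb.shape[1]
    return vegetation.sum() / n_pixels, plastic.sum() / n_pixels

# Synthetic test frame: left half "vegetation", right half "plastic"
frame = np.zeros((10, 10, 3), dtype=np.uint8)
frame[:, :5] = (40, 160, 50)     # green patch
frame[:, 5:] = (220, 220, 215)   # bright whitish patch
veg, plas = coverage_fractions(frame)
print(veg, plas)  # 0.5 0.5
```

Per-frame fractions like these could then be correlated across many images to test the hypothesized link between vegetation and plastic abundance.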
References
M. C. Blettler, E. Abrial, F. R. Khan, N. Sivri, and L. A. Espinola. Freshwater plastic pollution: Recognizing research biases and identifying knowledge gaps. Water Research, 143:416-424, 2018.
M. Geraeds, T. van Emmerik, R. de Vries, and M. S. bin Ab Razak. Riverine plastic litter monitoring using unmanned aerial vehicles (UAVs). Remote Sensing, 11(17):2045, 2019.
C. Schmidt, T. Krauth, and S. Wagner. Export of plastic debris by rivers into the sea. Environmental Science & Technology, 51(21):12246-12253, 2017.
T. van Emmerik, E. Strady, T.-C. Kieu-Le, L. Nguyen, and N. Gratiot. Seasonality of riverine macroplastic transport. Scientific Reports, 2019.
How to cite: Castrop, E., van Emmerik, T., van den Berg, S., Kosten, S., Strady, E., and Kieu-Le, T.-C.: Plants, plastic and rivers: Do water hyacinths play a role in riverine macroplastic transport?, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-15198, https://doi.org/10.5194/egusphere-egu2020-15198, 2020.
EGU2020-17389 | Displays | ITS2.7/HS12.2 | Highlight
Fluid mechanics of plastic debris clogging the hydraulic structures in IndonesiaVesna Bertoncelj, Wim Uijttewaal, Mohammad Farid, and Jeremy Bricker
The frequent urban floods in Jakarta and Bandung, Indonesia, affect the lives and livelihoods of millions of people. Floods cause damage and casualties, while climate change, unchecked development, and land subsidence are worsening the problem. One factor contributing to these floods is floating debris clogging the cities' drainage structures. A major proportion of the floating debris consists of macroplastics, which are extremely persistent in the environment. Trash racks clogged by the continuous accumulation of plastics in front of them can block the flow in the river, raising the upstream water level and causing floods.
Understanding of the transport and accumulation of macroplastics in river systems is limited, as field surveys are difficult to perform and the variety of plastic debris properties is enormous. However, understanding the origin, fate, and pathways of plastic waste is required in order to arrive at an optimal solution for plastic collection and for preventing harmful accumulation in front of hydraulic structures. With this in mind, field observations will be conducted on selected river sections in Bandung and Jakarta during the 2020 monsoon season. These observations will include measurements of bathymetry, velocity profiles, and concentrations, the characterization of floating debris, and the identification of debris accumulation hotspots. Furthermore, experimental and numerical modelling will be performed based on the data collected during the field campaign in order to couple different debris classes to a range of riverine situations and to understand the differences in their driving mechanisms.
Using a combination of field measurements, experimental modelling, and empirical relations, we aim to investigate the driving mechanisms of riverine plastic transport and the changes in hydraulic properties due to local disturbances of the current. We will therefore link the type of hydraulic structure and the extent of obstruction due to accumulated plastic debris to changes in the upstream water level. This will lead to a better understanding of plastic transport in the river systems of Bandung and Jakarta, and will help formulate design criteria for structures in trash-laden streams and devise ways to pass trash during floods.
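The link between rack blockage and upstream water level can be sketched with the classical Kirschmer (1926) trash-rack head-loss formula. Treating debris accumulation as a reduction of the clear bar spacing is our own illustrative simplification, not the authors' model:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def kirschmer_head_loss(v, bar_thickness, clear_spacing,
                        beta=2.42, incline_deg=90.0):
    """Kirschmer (1926) head loss (m) across a clean trash rack.

    v: approach velocity (m/s); beta: bar-shape factor (2.42 for
    rectangular bars); incline_deg: rack angle from horizontal.
    """
    return (beta * (bar_thickness / clear_spacing) ** (4.0 / 3.0)
            * v ** 2 / (2.0 * G) * math.sin(math.radians(incline_deg)))

def clogged_head_loss(v, bar_thickness, clear_spacing, blockage, **kw):
    """Illustrative adjustment: debris narrows the clear opening by a
    blockage fraction (0 = clean, 0.5 = half the opening blocked)."""
    return kirschmer_head_loss(v, bar_thickness,
                               clear_spacing * (1.0 - blockage), **kw)

clean = kirschmer_head_loss(0.8, 0.01, 0.10)   # 10 mm bars, 100 mm spacing
half = clogged_head_loss(0.8, 0.01, 0.10, 0.5)
print(clean, half)  # head loss rises sharply as plastic accumulates
```

Because the spacing ratio enters with a 4/3 exponent, halving the clear opening raises the head loss by a factor of about 2.5, which illustrates how progressive clogging backs up the upstream water level.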
How to cite: Bertoncelj, V., Uijttewaal, W., Farid, M., and Bricker, J.: Fluid mechanics of plastic debris clogging the hydraulic structures in Indonesia, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17389, https://doi.org/10.5194/egusphere-egu2020-17389, 2020.
EGU2020-18444 | Displays | ITS2.7/HS12.2
Sediment trapping as a method for monitoring microplastic flux rates and deposition at aquatic environmentsSaija Saarni, Samuel Hartikainen, Emilia Uurasjärvi, Senja Meronen, Jari Hänninen, Maarit Kalliokoski, and Arto Koistinen
Microplastics are reported from a wide range of aquatic environments, with concentrations up to thousands of particles per kilogram of sediment. Because such counts lack temporal control, however, they do not allow the influx rate of microplastic pollution to be evaluated. Yet understanding the annual flux of microplastics into aquatic environments is crucial for environmental monitoring and risk assessment. The sediment trap method is widely applied in aquatic sedimentary studies to measure sedimentation rates and understand sedimentation processes. We have tested a near-bottom sediment trap method in lacustrine and estuarine environments in central and coastal Finland for measuring and quantifying the microplastic influx rate over one year. Near-bottom sediment traps with two collector tubes of known surface area, fixed one metre above the bottom, collect all particles that are about to accumulate on the basin floor of the water body. A controlled temporal interval of trap maintenance enables the local microplastic influx rate, i.e. the number of accumulating particles per unit time per unit surface area, to be determined. The test results are very promising. Near-bottom sediment traps can be used for long-term monitoring in order to gain a deeper understanding of microplastic transport and sedimentation processes, to confirm and compare the feasibility and efficiency of different environmental conservation methods, to set threshold values for microplastic influx, and to verify that defined target conditions are met.
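The influx calculation described above (particles per unit surface area per unit time) can be sketched as follows; the tube dimensions, particle count, and deployment period are hypothetical examples, not values from the study:

```python
import math

def microplastic_influx(particle_count, tube_diameter_m, n_tubes,
                        deployment_days):
    """Influx rate in particles per m^2 per day from a sediment-trap sample.

    Assumes cylindrical collector tubes; the trap's collecting area is
    the combined cross-section of its tubes.
    """
    area = n_tubes * math.pi * (tube_diameter_m / 2.0) ** 2  # m^2
    return particle_count / (area * deployment_days)

# Hypothetical sample: 46 particles from a two-tube trap (8 cm inner
# diameter) recovered after a 30-day deployment
rate = microplastic_influx(46, 0.08, 2, 30)
print(f"{rate:.0f} particles m^-2 day^-1")
```

Repeating this over successive maintenance intervals would yield the seasonal and annual flux series the abstract argues is missing from concentration-only surveys.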
How to cite: Saarni, S., Hartikainen, S., Uurasjärvi, E., Meronen, S., Hänninen, J., Kalliokoski, M., and Koistinen, A.: Sediment trapping as a method for monitoring microplastic flux rates and deposition at aquatic environments , EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18444, https://doi.org/10.5194/egusphere-egu2020-18444, 2020.
EGU2020-20263 | Displays | ITS2.7/HS12.2
Waste water treatment as a source of microplastic pollutionSimon Dixon, Megan Trusler, and Charlotte Kiernan
Plastic pollution has now been found across the Earth’s active zone, with recent studies finding plastics in remote parts of the Pacific Ocean, in deep ocean trenches, and in the high Arctic. Of particular concern are microplastics (<5 mm diameter); these can be ingested by organisms, where they have been shown to cause both chronic and acute health problems. In order to address plastic pollution, there is a need to understand how plastic in the oceans is linked to terrestrial sources. Recent conceptual models have illustrated that plastic pollution is a complex, interlinked problem with myriad sources and pathways introducing and redistributing plastic around the environment. Terrestrial and freshwater sources are likely to be significant contributors to overall plastic pollution; however, to date they remain poorly understood and poorly quantified. There is a need both to identify and quantify sources of microplastic pollution in terrestrial and freshwater environments, and to characterize the vectors that lead to the redistribution and storage of microplastics in accumulation hotspots.
In this study we present pilot data attempting to characterise the influence of waste water treatment (WWT) processes on environmental plastic pollution. Using the concept of the “Plastic Cycle”, we identify various pathways by which plastics present in domestic waste water can enter the environment after treatment. Using two study areas in the UK, we quantify the microplastic loading to the environment from WWT effluent, which is discharged to freshwaters, and from WWT sludge, which is spread on agricultural land as fertiliser. Our results show that both effluent and sludge are important sources of microplastics to the environment. However, these can be of the same order of magnitude as other sources, indicating that addressing environmental microplastic pollution is likely to require an integrated approach. Our results also show lower loadings from these sources at some of our sites than reported in other studies, indicating that both the treatment processes used in WWT and the management practices in sludge spreading are likely to be important in determining the environmental loading of microplastics at specific sites. The influence of waste water treatment as a source of microplastic pollution needs to be further constrained, but our pilot data indicate a complex picture that must be better understood in order to inform environmental governance.
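A back-of-envelope version of the loading calculation: annual particle release scales with concentration times flow for effluent, and with concentration times mass applied for sludge. The concentrations, flows, and sludge tonnage below are hypothetical illustrations, not the study's measurements:

```python
def annual_effluent_load(particles_per_litre, discharge_m3_per_day):
    """Microplastic particles released to freshwaters per year via effluent."""
    return particles_per_litre * 1000.0 * discharge_m3_per_day * 365.0

def annual_sludge_load(particles_per_kg_dry, sludge_tonnes_dry_per_year):
    """Microplastic particles spread to land per year via sewage sludge."""
    return particles_per_kg_dry * sludge_tonnes_dry_per_year * 1000.0

# Hypothetical works: 2 particles/L effluent at 10,000 m^3/day,
# and 1,500 particles/kg dry sludge at 500 t dry solids/year
print(f"effluent: {annual_effluent_load(2, 10_000):.1e} particles/yr")  # 7.3e9
print(f"sludge:   {annual_sludge_load(1_500, 500):.1e} particles/yr")   # 7.5e8
```

Even with made-up inputs, the arithmetic shows why both routes can land within an order of magnitude of each other, supporting the abstract's call for an integrated approach rather than targeting a single pathway.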
How to cite: Dixon, S., Trusler, M., and Kiernan, C.: Waste water treatment as a source of microplastic pollution, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20263, https://doi.org/10.5194/egusphere-egu2020-20263, 2020.
EGU2020-20643 | Displays | ITS2.7/HS12.2 | Highlight
Wastewater as a potential source of microplastics in aquatic environmentsTanveer M. Adyel
Rapidly increasing production and use of microplastics (MPs) raise environmental concerns globally. The literature indicates that wastewater and wastewater treatment plants (WWTPs) are critical sources of MPs released to the environment. Among different MPs, microbeads added to facial cleansers and toothpaste can be discharged directly into wastewater through human activities. Synthetic clothing, e.g., polyester (PES) and nylon, may shed thousands of fibers into wastewater during washing. WWTPs are not designed to capture MPs; therefore, a large MP load can be discharged with little or no treatment and can accumulate in aquatic environments. This work presents a comprehensive overview of available information on the presence of MPs in different freshwater environments, particularly rivers, along with MP types, sizes, shapes, and properties. Moreover, the study also highlights significant technical advances in MP detection, characterization, and quantification from complex sample matrices.
How to cite: Adyel, T. M.: Wastewater as a potential source of microplastics in aquatic environments, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20643, https://doi.org/10.5194/egusphere-egu2020-20643, 2020.
EGU2020-11840 | Displays | ITS2.7/HS12.2 | Highlight
Riverine Macroplastics and How to Find ThemTim van Emmerik and Anna Schwarz
Macroplastic (>0.5 cm) pollution in aquatic environments is an emerging environmental risk, as it negatively impacts ecosystems, endangers aquatic species, and causes economic damage. Rivers are known to play a crucial role in transporting land-based plastic waste into the world’s oceans. However, rivers and their ecosystems are also directly affected by plastic pollution. To better quantify global plastic pollution pathways and to effectively reduce sources and risks, a thorough understanding of riverine macroplastic sources, transport, fate and effects is crucial. In our presentation, we discuss the current state of the science on macroplastics in rivers and evaluate existing knowledge gaps. We discuss the origin and fate of riverine plastics, including processes and factors influencing macroplastic transport and its spatiotemporal variation. Moreover, we present an overview of monitoring and modeling efforts to characterize riverine plastic transport and give examples of typical values from around the world (van Emmerik & Schwarz, 2020). With our presentation, we aim to present a comprehensive overview of riverine macroplastic research to date and suggest multiple ways forward for future research.
References
van Emmerik, T, Schwarz, A. Plastic debris in rivers. WIREs Water. 2020; 7:e1398. https://doi.org/10.1002/wat2.1398
How to cite: van Emmerik, T. and Schwarz, A.: Riverine Macroplastics and How to Find Them, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11840, https://doi.org/10.5194/egusphere-egu2020-11840, 2020.
EGU2020-698 | Displays | ITS2.7/HS12.2
Rapid assessment of floating macroplastic transport in the RhinePaul Vriend, Tim van Emmerik, Caroline van Calcar, Merel Kooi, Harm Landman, and Remco Pikaar
Most marine litter pollution is assumed to originate from land-based sources, entering the marine environment through rivers. To better understand and quantify the risk that plastic pollution poses to aquatic ecosystems, and to develop effective prevention and mitigation methods, a better understanding of riverine plastic transport is needed. To achieve this, quantification of riverine plastic transport is crucial. Here, we demonstrate how established methods can be combined to provide a rapid and cost-effective characterization and quantification of floating macroplastic transport in the River Rhine. We combine visual observations with passive sampling to arrive at a first-order estimate of macroplastic transport, both in number (10 - 75 items per hour) and in mass per unit of time (1.3 – 9.7 kg per day). Additionally, our assessment gives insight into the most abundant macroplastic polymer types in the downstream reach of the River Rhine. Furthermore, we explore the spatial and temporal variation of plastic transport within the river, and discuss the benefits and drawbacks of current sampling methods. Finally, we present an outlook for future monitoring of major rivers, including several suggestions on how to expand the rapid assessment presented in this paper.
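The two reported transport units (items per hour from visual counts, kilograms per day as a mass flux) are linked by a mean item mass. A minimal sketch of that conversion follows; the assumed mean mass of 5 g per item is a hypothetical illustration, not a value from the study:

```python
# Convert a count-based macroplastic transport rate to a mass flux.
# The mean item mass is an assumed placeholder for illustration only.

def mass_flux_kg_per_day(items_per_hour, mean_item_mass_g):
    """Count rate (items/h) x 24 h/day x mean mass (g), converted to kg."""
    return items_per_hour * 24 * mean_item_mass_g / 1000

# With the reported 10-75 items/h range and an assumed 5 g/item:
print(mass_flux_kg_per_day(10, 5), "to", mass_flux_kg_per_day(75, 5), "kg/day")
```

With the assumed 5 g/item this yields roughly 1.2 to 9.0 kg/day, of the same order as the 1.3 – 9.7 kg/day range reported above.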
How to cite: Vriend, P., van Emmerik, T., van Calcar, C., Kooi, M., Landman, H., and Pikaar, R.: Rapid assessment of floating macroplastic transport in the Rhine, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-698, https://doi.org/10.5194/egusphere-egu2020-698, 2020.
EGU2020-22384 | Displays | ITS2.7/HS12.2
Forecasting plastic mobilization during extreme hydrological eventsJasper Roebroek, Shaun Harrigan, and Tim van Emmerik
Plastic pollution of aquatic ecosystems is an emerging environmental risk. Land-based plastics are considered the main source of plastic litter in the world’s oceans. Quantifying the emission from rivers into the oceans is crucial to optimize prevention, mitigation and cleanup strategies. Although several studies have focused on estimating annual plastic emission based on average hydrology, the role of extreme events remains underexplored. Recent work has demonstrated that floods can mobilize additional plastics: for example, the 2015/2016 UK floods resulted in a 70% decrease in the microplastics stored in river sediments in several catchments. In this project, we will explore the use of the Global Flood Awareness System (GloFAS) to assess the additional mobilization of plastic pollution during extreme hydrological events.
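One simple way a discharge forecast could be screened for potential mobilization events is a threshold rule. The sketch below is an illustrative assumption, not the GloFAS methodology, and all numbers are made up:

```python
# Flag forecast days on which discharge exceeds a flood threshold
# (e.g. a return-period level) as candidate plastic-mobilization events.
# Purely illustrative; not the GloFAS forecasting methodology.

def mobilization_alerts(forecast_discharge, threshold):
    """Return the (0-based) day indices where discharge exceeds threshold."""
    return [day for day, q in enumerate(forecast_discharge) if q > threshold]

daily_q = [120, 135, 410, 560, 300, 150]  # m^3/s, made-up daily forecast
print(mobilization_alerts(daily_q, threshold=350))  # → [2, 3]
```

In practice the threshold would come from flood-frequency analysis, and the flagged days would feed into an estimate of additionally mobilized plastic.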
How to cite: Roebroek, J., Harrigan, S., and van Emmerik, T.: Forecasting plastic mobilization during extreme hydrological events, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22384, https://doi.org/10.5194/egusphere-egu2020-22384, 2020.
EGU2020-22406 | Displays | ITS2.7/HS12.2
Crowd-based observations of riverine macroplastic pollutionBarbara Strobl, Tim van Emmerik, Jan Seibert, Simon Etter, Tijmen den Oudendammer, Martine Rutten, and Ilja van Meerveld
Plastic debris in aquatic environments is an emerging environmental hazard. Macroplastic pollution (>5 cm) negatively impacts aquatic life and threatens human livelihoods, on land, in oceans and within river systems. Reliable information on the origin, fate and pathways of plastic through river systems is required to optimize prevention, mitigation and reduction strategies. Yet, accurate and long-term data on plastic transport are still lacking. Current macroplastic monitoring strategies involve labor-intensive sampling methods and require investment in infrastructure. As a result, these measurements have a low temporal resolution and are available for only a few locations. Crowd-based observations of riverine macroplastic pollution may offer a way to collect data more frequently and cost-effectively over an extensive geographical range. In this presentation we demonstrate the potential of crowd-based observations of floating plastic and plastic on riverbanks. We extended the existing CrowdWater smartphone app for hydrological observations with a module for plastic observations in rivers. We analyzed data from two cases: (1) floating plastic in the River Klang, Malaysia, and (2) plastic on the banks of the River Rhine in the Netherlands. Crowd-based observations of floating plastic yield estimates of plastic transport, of the distribution of plastic across the river width, and of polymer composition similar to reference observations. The riverbank observations provided the first data on plastic pollution along the most downstream stretches of the Rhine, revealing peaks close to urban areas and an increasing plastic density towards the river mouth. With this presentation we aim to highlight the important role that crowd-based observations of macroplastic pollution in river systems can play in future monitoring strategies, providing complementary data on plastic transport and composition at a higher spatial and temporal resolution than is possible with standard methods.
How to cite: Strobl, B., van Emmerik, T., Seibert, J., Etter, S., den Oudendammer, T., Rutten, M., and van Meerveld, I.: Crowd-based observations of riverine macroplastic pollution, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22406, https://doi.org/10.5194/egusphere-egu2020-22406, 2020.
EGU2020-9176 | Displays | ITS2.7/HS12.2 | Highlight
Design of Microplastics Citizen Science KitRachael Hughson-Gill
Microplastics are an ever-increasing problem. Every river tested in a recent study contained microplastics, and an estimated 80% of all plastic in the ocean comes from upstream sources. Despite this, there is little understanding of the abundance of plastic, its characteristics, and the full impact it is having on marine and freshwater ecosystems and on wider ecological systems.
Current freshwater monitoring does not consider the fluid dynamics of rivers, is difficult to use, and is inaccessible to the wider public. My project focuses on creating a product that allows large-scale collection of microplastic data through citizen science, enabling groups of people to analyse their local natural environment for the presence and abundance of microplastics in the water. This method of data collection could provide information at a scale not possible with traditional methods and would allow comparison between freshwater systems. Such comparison is fundamental to begin filling the knowledge gaps in our understanding of microplastics.
Monitoring is inaccessible to the public not only through its tools but also through the current communication of data, with research rarely breaking into the public domain. Citizen science offers not just an improvement in understanding but also an opportunity for engagement with the public. Raising awareness of the impact of habits around plastic by sharing monitoring data can generate much-needed change at both the individual and the policy level, addressing the problem at its source. The effect of public opinion on freshwater systems can already be seen in the microbead ban and in regulation of plastic bags, plastic straws, and industrial pollution.
The creation of this product follows a multidisciplinary approach that blends engineering and design practices. A holistic approach to creation is fundamental to the success of tools, and therefore to the success of the research carried out with them. A tool whose function lies both in the public engagement of its use (increased awareness) and in the outcome of its use (microplastics data) requires an engaging user experience as well as data integrity, implemented through engineering design.
This project offers an opportunity to demonstrate the importance of the design process in research tools, how it aids the research process, and the positive impact that can come from it.
How to cite: Hughson-Gill, R.: Design of Microplastics Citizen Science Kit, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9176, https://doi.org/10.5194/egusphere-egu2020-9176, 2020.
EGU2020-10049 | Displays | ITS2.7/HS12.2
Coupled CFD-DEM modelling to assess settlement velocity and drag coefficient of microplasticsRobin Jérémy, Latessa Pablo Gaston, and Manousos Valyrakis
Several studies have documented high concentrations of microplastics in freshwater sources, in the oceans, and even in treated tap and bottled water. Understanding the physics of these particles in the aquatic environment has become one of the key research needs identified in the World Health Organization report (2019). In order to develop novel and efficient methodologies for sampling, treating and removing microplastics from water bodies, a thorough understanding of the sources, transport and storage mechanisms of these particles is required.
In this study, the settling velocity governing the transport [1, 2] of low-density particles (1 < ρ < 1.4 g cm⁻³) and the associated drag coefficients are assessed through numerical modelling. The effects of the relative densities of fluid and particle and of the fluid temperature are analysed, as well as the impact of particle size and shape [3].
Computational Fluid Dynamics (CFD) techniques are applied to solve the fluid dynamics, while the Discrete Element Method (DEM) is used to model the particle trajectories [4]. The two modules are coupled through CFDEM, which transmits forces from the fluid to the particles and from the particles to the surrounding water via the Fictitious Boundary Method.
Several tests are run under the same particle conditions in order to estimate the influence of turbulent flows on these experiments. The influence of different particle densities and diameters on settling velocities and drag coefficients is assessed. The numerical results are validated against a wide range of experimental data [2, 3] and compared against empirical predictions.
There is an urgent need to better understand the sources and transport of microplastics through freshwater bodies. In this sense, sampling and quantification of microplastics in drinking water sources is key to evaluating environmental status and to designing the most appropriate techniques for reducing or removing microplastics from aquatic environments. Coupled CFD-DEM models provide a powerful tool for understanding and predicting the transport processes and the accumulation of microplastics along fluvial pathways.
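The force balance behind settling-velocity estimates of this kind can be sketched with a generic iterative drag-law calculation. This is a textbook scheme (Schiller-Naumann drag for spheres), not the coupled CFD-DEM model described above, and all parameter values are illustrative:

```python
import math

# Terminal settling velocity of a sphere from a drag-law force balance,
# iterated to self-consistency. Generic textbook scheme, not the CFD-DEM
# model of the abstract; parameter values are illustrative.

def settling_velocity(d, rho_p, rho_f=1000.0, mu=1.0e-3, g=9.81):
    """Terminal velocity (m/s) of a sphere of diameter d (m) and density
    rho_p (kg/m^3) in a fluid of density rho_f and viscosity mu (Pa s)."""
    w = 1e-6  # initial guess
    for _ in range(200):
        re = max(rho_f * w * d / mu, 1e-12)           # particle Reynolds number
        cd = 24 / re * (1 + 0.15 * re**0.687)         # Schiller-Naumann, Re < 1000
        w_new = math.sqrt(4 * g * d * abs(rho_p - rho_f) / (3 * cd * rho_f))
        if abs(w_new - w) < 1e-10:
            break
        w = w_new
    return w

# A 1 mm particle of density 1.2 g/cm^3 (within the 1-1.4 range studied):
print(f"{settling_velocity(1e-3, 1200):.4f} m/s")
```

For very small particles the scheme reduces to Stokes' law; for larger ones the nonlinear drag correction matters, which is why an iteration (or a coupled solver, as in the abstract) is needed.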
References
[1] Valyrakis M., Diplas P. and Dancey C.L. 2013. Entrainment of coarse particles in turbulent flows: An energy approach. J. Geophys. Res. Earth Surf., Vol. 118, No. 1, pp. 42-53, doi:10.1029/2012JF002354.
[2] Valyrakis, M., Farhadi, H. 2017. Investigating coarse sediment particles transport using PTV and “smart-pebbles” instrumented with inertial sensors, EGU General Assembly 2017, Vienna, Austria, 23-28 April 2017, id. 9980.
[3] Valyrakis, M., J. Kh. Al-Hinai, D. Liu (2018), Transport of floating plastics along a channel with a vegetated riverbank, 12th International Symposium on Ecohydraulics, Tokyo, Japan, August 19-24, 2018, a11_2705647.
[4] Valyrakis M., P. Diplas, C.L. Dancey, and A.O. Celik. 2008. Investigation of evolution of gravel river bed microforms using a simplified Discrete Particle Model, International Conference on Fluvial Hydraulics River Flow 2008, Ismir, Turkey, 03-05 September 2008, 10p.
How to cite: Jérémy, R., Pablo Gaston, L., and Valyrakis, M.: Coupled CFD-DEM modelling to assess settlement velocity and drag coefficient of microplastics, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10049, https://doi.org/10.5194/egusphere-egu2020-10049, 2020.
EGU2020-22467 | Displays | ITS2.7/HS12.2
Characterization of Plastic Pollution in Rivers: Case of Sapang Baho River, Rizal, PhilippinesMa. Brida Lea D. Diola, Maria Antonia N. Tanchuling, Dawn Rhodette G. Bonifacio, and Marian Jave N. Delos Santos
The Philippines is considered one of the top contributors of plastic waste to the oceans globally. Lack of strict implementation of solid waste management regulations has led to mismanaged waste, especially plastics, eventually ending up in water bodies. This study characterizes plastic waste pollution in the Sapang Baho River in the province of Rizal. The river is located in an urban area and is a significant tributary of Laguna Lake, the largest lake in the country. Macrowastes and microplastics in the Sapang Baho River were characterized and analyzed to provide baseline information and to raise awareness of plastic pollution at both the macro- and micro-scale. This study also identified possible sources of microplastics by relating the particles to the plastic wastes present and to activities at the sites. Waste analysis and characterization studies (WACS) were conducted at four sampling stations along the river. Microplastic samples were also collected from surface water and characterized by form (filament, fragment, film, foam, and pellet) through microscope examination. Representative samples were subjected to Raman spectroscopy to identify the polymer types. Results show that the macrowaste samples were mostly plastic wastes (27.33% by mass), of which film plastics made up 47%. Most of the microplastic particles were filaments (92.24%), fragmented from textile wastes and clothes washing. In terms of color, transparent particles were dominant, and particles in the lower size range (0.3 mm - 0.8 mm) were predominant. Samples subjected to Raman spectroscopy were mainly polyethylene (PE), a material used in containers and packaging. Lastly, it was calculated that the surface water of the Sapang Baho River contributes approximately 24 - 362 microplastic particles to Laguna Lake.
How to cite: Diola, Ma. B. L. D., Tanchuling, M. A. N., Bonifacio, D. R. G., and Delos Santos, M. J. N.: Characterization of Plastic Pollution in Rivers: Case of Sapang Baho River, Rizal, Philippines, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22467, https://doi.org/10.5194/egusphere-egu2020-22467, 2020.
EGU2020-11223 | Displays | ITS2.7/HS12.2
Riverine transport of microplastics from the Dutch border to the North sea
Frans Buschman, Annelotte van der Linden, and Arjen Markus
Microplastics may negatively affect marine and freshwater ecosystems and human health. Important point sources of microplastics in rivers are locations where microplastics are released into the river, such as wastewater treatment plants. Diffuse sources include the fragmentation of macroplastic items and tire and road wear particles that are flushed into the river (Unice et al., 2019). Once in the river, the different types and sizes of microplastics are transported with the flow. How this transport depends on environmental conditions is largely unknown. Because of the effort needed to monitor microplastic concentration and composition, observations are usually carried out at only one location in the water column and are repeated only a few times. With a model, the spatial and temporal variation of the microplastic concentration can be predicted.
We modeled the transport and fate of microplastics (here defined as particles between 0.05 and 5 mm) in Dutch rivers and streams, using a depth- and width-averaged flow model for the Netherlands. Microplastics were released at the main upstream boundaries of the model (Lobith in the Rhine and Eijsden in the Meuse); the concentrations of the different types of microplastics were based on observations by Urgert (2015). The model determined transport and fate from the processes of advection, deposition, and hetero-aggregation of microplastics with sediment. Overall, the model results suggest that deposition is small: about 66–90 percent of the released microplastics are transported out of the model domain towards the sea, meaning that 10–34 percent are either deposited on the river bed or stored in the water column. Resuspension of deposited microplastics was not included in the model; a sensitivity study in which it was included suggests that it is not an important process in the current 1D simulation, since the flow velocities at accumulation areas rarely exceed the critical flow velocity for resuspension. The simulated annual transport of microplastics is higher than estimates based on observations (van der Wal et al., 2015; Mani et al., 2015), although sources within the Netherlands are not yet included in the model; this comparison needs to be re-evaluated once those sources have been introduced.
- Mani, T., A. Hauk, U. Walter and P. Burkhardt-Holm (2015) Microplastics profile along the Rhine River. Scientific Reports.
- Unice, K.M., M.P. Weeber, M.M. Abramson, R.C.D. Reid, J.A.G. van Gils, A.A. Markus, A.D. Vethaak and J.M. Panko (2019) Characterizing export of land-based microplastics to the estuary – Part I: Application of integrated geospatial microplastic transport models to assess tire and road wear particles in the Seine watershed. Science of the Total Environment. https://doi.org/10.1016/j.scitotenv.2018.07.368
- Urgert, W. (2015) Microplastics in the rivers Meuse and Rhine – Developing guidance for a possible future monitoring program. MSc thesis.
- Van der Wal, M., M. van der Meulen, G. Tweehuijsen, M. Peterlin, A. Palatinus, K. Virsek, L. Coscia and A. Krzan (2015) Identification and Assessment of Riverine Input of (Marine) Litter. Final Report for the European Commission DG Environment under Framework Contract No ENV.D.2/FRA/2012/0025.
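The 1D balance of upstream release, advection, and velocity-limited deposition described above can be sketched in a few lines. Everything below (the grid, the sinusoidal velocity field, the first-order deposition rate `k_dep`, and the critical velocity `u_crit`) is a hypothetical toy for illustration, not the authors' model of the Dutch river network.

```python
import numpy as np

# Toy 1D advection-deposition balance; all parameters are hypothetical.
nx, dx = 200, 1000.0                                   # 200 river segments of 1 km
x = np.arange(nx) * dx
u = 0.6 + 0.3 * np.sin(8 * np.pi * x / (nx * dx))      # spatially varying velocity (m/s)
k_dep = 2e-6                                           # first-order deposition rate (1/s)
u_crit = 0.35                                          # deposition only below this velocity (m/s)
dt = 0.5 * dx / u.max()                                # CFL-limited time step (s)

c = np.zeros(nx)                                       # normalized concentration per segment
bed = np.zeros(nx)                                     # cumulative deposition per segment

for _ in range(20000):
    c_up = np.concatenate(([1.0], c[:-1]))             # fixed release at the upstream boundary
    c = c - u * dt / dx * (c - c_up)                   # first-order upwind advection
    dep = np.where(u < u_crit, k_dep * dt * c, 0.0)    # settle only in the slow reaches
    c -= dep
    bed += dep

print(f"fraction of boundary concentration exported at the outlet: {c[-1]:.2f}")
```

With these invented numbers the bulk of the boundary release reaches the outlet while a small fraction settles in the slow reaches, qualitatively matching the large export fraction reported above.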
How to cite: Buschman, F., van der Linden, A., and Markus, A.: Riverine transport of microplastics from the Dutch border to the North sea, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11223, https://doi.org/10.5194/egusphere-egu2020-11223, 2020.
ITS2.8/OS4.10 – Plastic in the marine environment: observing and explaining where it comes from and where it goes
EGU2020-10456 | Displays | ITS2.8/OS4.10 | Highlight
Macroplastics Pollution in the Southern North Sea – Sources, Pathways and Abatement Strategies
Jörg-Olaf Wolff, Florian Hahner, Jens Meyerjürgens, Marcel Ricker, Rosanna Isabel Schöneich-Argent, Thomas Badewien, Karsten Alexander Lettmann, Peter Schaal, Holger Freund, Ingo Mose, Emil Stanev, and Oliver Zielinski
Since 2016, an interdisciplinary consortium at the Carl von Ossietzky University of Oldenburg has been funded by the Lower Saxony Ministry for Science and Culture to provide solid scientific knowledge of the sources, pathways and accumulation zones of plastic litter. The team consists of physical oceanographers, geoecologists, biologists and environmental planners.
Using simple wooden drifters, GPS drifters and high-resolution numerical modelling, the consortium studied the dispersal of floating macroplastics (i.e. visible plastic fragments and objects) and accumulation areas within the German Bight and the Wadden Sea. Furthermore, coastal sensors and observation systems were employed to gather data on hydrodynamic parameters. In addition, the general public has actively participated in the collection of litter data via a web-based registration system for reporting findings of wooden drifters.
In this presentation we will highlight some of the most important results of the project, among them the surprising observation of a complete reversal of the circulation in the southern North Sea in March 2018, supported by drifter reports from citizen scientists in Britain. We will also briefly discuss the heavy workload involved in public outreach (radio, TV, print media, presentations to various stakeholder groups), which future projects should anticipate already at the planning stage.
How to cite: Wolff, J.-O., Hahner, F., Meyerjürgens, J., Ricker, M., Schöneich-Argent, R. I., Badewien, T., Lettmann, K. A., Schaal, P., Freund, H., Mose, I., Stanev, E., and Zielinski, O.: Macroplastics Pollution in the Southern North Sea – Sources, Pathways and Abatement Strategies, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10456, https://doi.org/10.5194/egusphere-egu2020-10456, 2020.
EGU2020-17048 | Displays | ITS2.8/OS4.10
Modelling the accumulation and transport of microplastics by Arctic sea ice
Miguel Angel Morales Maqueda and Alethea Sara Mountford
The presence of microplastics in the Arctic sea ice cover and water column, as well as on land, has heightened already serious concerns about the dispersion of litter in the global environment. We present a 50-year simulation, carried out with the NEMO ocean general circulation model, of the dispersion of buoyant and neutrally buoyant microplastics in the global ocean, including a simple formulation of microplastic accumulation in, and advection by, sea ice. Microplastics enter the Arctic predominantly through the Barents Sea, with a smaller input through the Bering Strait, although the simulation also takes into account small plastic sources along the Arctic coastline. Microplastics become trapped in newly formed sea ice chiefly on the Eurasian shelves and in the Chukchi Sea, but a still significant amount is transferred from the mixed layer to the ice base through congelation in the central Arctic, where microplastics congregate nearer to the surface than elsewhere in the global ocean due to the strong stratification and the relatively low levels of vertical turbulence underneath multiyear sea ice. In the model, the maximum average residence time of sea ice in the Arctic is about six years, and this is therefore also the typical timescale for maximum microplastic accumulation in the ice cover. Plastics trapped in sea ice undergo a seasonal cycle of accumulation and release in consonance with the sea ice freeze and melt cycle, but are ultimately released back into the ocean in the Greenland and Labrador seas, from where they are subsequently transported into the North Atlantic.
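The seasonal accumulation-release cycle described above can be caricatured with a two-box (mixed layer / sea ice) exchange model. The rate constants below are invented for illustration and are not taken from the NEMO simulation.

```python
import math

# Toy two-box model of microplastic exchange between the mixed layer and sea
# ice; k_freeze and k_melt are invented illustrative rate constants (1/yr).
dt = 1.0 / 365.0                    # one-day step, in years
ocean, ice = 1.0, 0.0               # normalized microplastic inventories
k_freeze, k_melt = 2.0, 1.2         # uptake / release rate constants (1/yr)

ice_series = []
for step in range(12 * 365):        # twelve model years
    t = step * dt
    season = math.sin(2.0 * math.pi * t)        # >0 freezing season, <0 melting season
    if season > 0:                              # scavenging into newly formed ice
        flux = k_freeze * season * ocean * dt
    else:                                       # release back to the ocean on melt
        flux = k_melt * season * ice * dt
    ocean -= flux
    ice += flux
    ice_series.append(ice)

last_year = ice_series[-365:]
print(f"quasi-steady ice inventory cycles between {min(last_year):.2f} and {max(last_year):.2f}")
```

After a few model years the ice inventory settles into a repeating annual cycle: it grows during the freezing season and partially drains back to the ocean box during melt, with the total inventory conserved.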
How to cite: Morales Maqueda, M. A. and Mountford, A. S.: Modelling the accumulation and transport of microplastics by Arctic sea ice, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17048, https://doi.org/10.5194/egusphere-egu2020-17048, 2020.
EGU2020-1892 | Displays | ITS2.8/OS4.10
Modelling the global biological microplastic particle sink
Karin Kvale, AE Friederike Prowe, Chia-Te Chien, Angela Landolfi, and Andreas Oschlies
Forty percent of the plastic produced annually ends up in the ocean. What happens to the plastic after that is poorly understood, though a growing body of data suggests it is rapidly spreading throughout the ocean. The mechanisms of this spread are not straightforward for small, weakly or neutrally buoyant plastic size fractions (the microplastics), in part because they aggregate in marine snow and are consumed by zooplankton. This biological transport pathway is suspected to be a primary removal mechanism for surface microplastics, but exactly how it might work in the real ocean is unknown. We search the parameter space of a new microplastic model embedded in an Earth system model to show that biological uptake significantly shapes the global microplastic inventory and its distribution, despite being an apparently inefficient removal pathway.
How to cite: Kvale, K., Prowe, A. F., Chien, C.-T., Landolfi, A., and Oschlies, A.: Modelling the global biological microplastic particle sink, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1892, https://doi.org/10.5194/egusphere-egu2020-1892, 2020.
EGU2020-15253 | Displays | ITS2.8/OS4.10
A stable isotope assay for determining microbial degradation rates of plastics in the marine environment
Maaike Goudriaan, Victor Hernando Morales, Ronald van Bommel, Marcel van der Meer, Rachel Ndhlovu, Johan van Heerwaarden, Kai-Uwe Hinrichs, and Helge Niemann
The popularity of plastic as a cheap, easy-to-use, moldable material has been growing exponentially, leading to a likewise increase in plastic waste. As a result, plastic pollution has been surging in the marine realm, and the effects and fates of these modern, man-made compounds in our oceans are unresolved. Pathways of plastic degradation (physicochemical and biological) in the marine environment are not well constrained; yet microbial plastic degradation is a potential plastic sink in the ocean. However, methods to determine this process are lacking, particularly if the overall turnover is in the sub-percent range. We developed a novel method based on incubations with isotopically labelled polymers for investigating microbial plastic degradation in marine environments. We tested our method with a Rhodococcus ruber strain (C-208), a known plastic degrader, as a model organism. In our experiments we used granular polyethylene (PE), almost completely labelled with the stable isotope 13C (99%), as the sole carbon source. We monitored the CO2 concentration and stable carbon isotope ratios in the headspace over time during 35-day incubations at atmospheric oxygen concentrations and found an excess production of 13C-CO2. This result provides direct evidence for the microbially mediated mineralization of carbon ultimately derived from the polymer. After terminating the incubation, we measured the dissolved inorganic carbon (DIC) and pH, allowing us to determine the total excess production of 13C-CO2 and DIC, and thus the rate of plastic degradation. Of the 2000 µg PE added, ~0.1% was degraded over the course of 35 days, at a rate of ~1.5 µg per month, providing a first characterization of the mineralization kinetics of PE by R. ruber. The results show that isotopically labelled polymers can be used to determine plastic degradation rates. The method shows promise of being more accurate than classic gravimetric methods.
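The reported kinetics can be cross-checked with simple arithmetic. The only assumption added here is the 30.4-day average month used for the unit conversion; the mass, percentage and duration are taken from the abstract.

```python
# Back-of-the-envelope check of the reported PE mineralization kinetics.
pe_added_ug = 2000.0                 # 13C-labelled PE added to the incubation (µg)
fraction_mineralized = 0.001         # ~0.1 % degraded over the incubation
days = 35.0

degraded_ug = pe_added_ug * fraction_mineralized        # mass mineralized (µg)
rate_ug_per_month = degraded_ug / days * 30.4           # 30.4-day month assumed here

print(f"{degraded_ug:.1f} µg mineralized -> ~{rate_ug_per_month:.1f} µg per month")
```

The conversion gives roughly 1.7 µg per month, consistent with the reported ~1.5 µg per month once the rounding of the ~0.1% figure is taken into account.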
How to cite: Goudriaan, M., Hernando Morales, V., van Bommel, R., van der Meer, M., Rachel Ndhlovu, R., van Heerwaarden, J., Hinrichs, K.-U., and Niemann, H.: A stable isotope assay for determining microbial degradation rates of plastics in the marine environment, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-15253, https://doi.org/10.5194/egusphere-egu2020-15253, 2020.
EGU2020-13617 | Displays | ITS2.8/OS4.10
Flocculation of microplastic and cohesive sediment in natural seawater
Thorbjørn Joest Andersen, Stiffani Rominikan, Ida Stuhr Laursen, Kristoffer Hofer Skinnebach, Nynne Zaza Grube, Soeren Roger Jedal, Simon Nyboe Laursen, and Mikkel Fruergaard
The flocculation of combinations of microplastic particles (MP) and natural cohesive sediment has been investigated in a laboratory setup using unfiltered seawater. The experiments were conducted to test the hypothesis that MP may flocculate with natural organic and inorganic particles in estuarine and marine environments. MP particles in the size range 63–125 µm were incubated with suspensions of local untreated seawater and untreated fine-grained sediment (<20 µm) collected from a tidal mudflat. Settling experiments were carried out with both a floc camera (PCam) and conventional settling tubes.
Flocculation and sedimentation have been investigated for MP particles of PVC as well as particles of high-density polypropylene, which is used in certain fishing gear. The studies have generally confirmed our hypothesis that microplastics are incorporated into aggregates along with other natural particles, thus settling faster than they would as single particles. The exact aggregation mechanisms still remain to be revealed, but the general cohesiveness of fine-grained natural particles and organic particles, as well as particulate and dissolved organic polymers, is believed to be responsible for the flocculation. A strong effect of salt ions was also observed, confirming the classical concept of increased flocculation of fine-grained particles as they are transported from fresh water to estuarine and marine waters.
The implication of the aggregation is that primary MP from land-based sources are likely to flocculate with other suspended particles, especially as they enter saline waters. The particles are therefore expected to deposit close to the sources, typically rivers. This applies not only to microplastic particles that are denser than seawater but also to low-density plastic types that would otherwise float. However, secondary MP may be formed by disintegration of plastic anywhere, and these MP particles could therefore settle wherever plastic is present at the sea surface, for example under the ocean gyres where plastic is known to accumulate. Here, too, interaction with other particles in the water column is expected, but the concentration of natural particles is much lower than in coastal waters; if the concentration of plastic in the marine environment continues to rise, the transport of natural organic and inorganic particles may itself start to be modified.
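A Stokes-law sketch illustrates why incorporation into flocs accelerates settling, and how an otherwise buoyant polymer can sink once aggregated with denser mineral sediment. The densities and diameters below are hypothetical illustrative values (and the Stokes regime is simply assumed), not quantities measured in the study.

```python
# Stokes settling sketch with hypothetical particle and floc properties.
g = 9.81            # gravity (m/s2)
mu = 1.07e-3        # dynamic viscosity of seawater (Pa s)
rho_w = 1025.0      # seawater density (kg/m3)

def stokes_velocity(d, rho_p):
    """Stokes settling velocity in m/s; positive means sinking."""
    return (rho_p - rho_w) * g * d ** 2 / (18.0 * mu)

w_pe = stokes_velocity(100e-6, 950.0)      # lone 100 µm low-density particle: buoyant
w_pvc = stokes_velocity(100e-6, 1380.0)    # lone 100 µm PVC particle: sinks slowly
w_floc = stokes_velocity(500e-6, 1075.0)   # 500 µm mud floc carrying the plastic

print(f"PE {w_pe * 1e3:+.2f} mm/s, PVC {w_pvc * 1e3:+.2f} mm/s, floc {w_floc * 1e3:+.2f} mm/s")
```

A lone low-density particle rises, whereas a larger floc with even a modest excess density sinks faster than a dense single particle, because the settling velocity scales with the diameter squared.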
How to cite: Andersen, T. J., Rominikan, S., Laursen, I. S., Skinnebach, K. H., Grube, N. Z., Jedal, S. R., Laursen, S. N., and Fruergaard, M.: Flocculation of microplastic and cohesive sediment in natural seawater, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13617, https://doi.org/10.5194/egusphere-egu2020-13617, 2020.
EGU2020-3715 | Displays | ITS2.8/OS4.10
All that glitters is not plastic: the case of open-ocean fibres
Giuseppe Suaria, Aikaterini Achtypi, Vonica Perold, Stefano Aliani, Andrea Pierucci, Jasmine Lee, and Peter Ryan
Textile fibres are ubiquitous contaminants of emerging concern. Traditionally ascribed to the 'microplastics' family, their widespread occurrence in the natural environment is commonly reported in plastic pollution studies, with the misleading belief that they largely derive from wear and tear of synthetic fabrics. Their supposedly synthetic nature has been widely used to explain their persistence in the environment, and hence their presence in virtually all compartments of the planet, including sea ice, the deep sea, soils, atmospheric fallout, foods and drinks. To date, however, an extensive characterization of their polymeric composition has never been performed, even though evidence is slowly emerging that most of these fibres are not synthetic. By compiling a dataset of more than 916 seawater samples collected in six different ocean basins, we confirm that microfibres are ubiquitous in the world's seas, but mainly composed of natural polymers. The chemical characterization of almost 2000 fibres through µFTIR techniques revealed that, in striking contrast to global production patterns, only 8.2% of marine fibres are actually synthetic, the rest being predominantly of animal (12.3%) or vegetal (79.5%) origin. These results demonstrate the widespread occurrence of cellulosic fibres in the marine environment, emphasizing the need for full chemical identification of these particles before classifying them as microplastics. On the basis of our findings, it appears critical to assess the origins, impacts and degradation times of cellulosic fibres in the marine environment, as well as the wider implications of a global overestimation of microplastic loads in natural ecosystems.
How to cite: Suaria, G., Achtypi, A., Perold, V., Aliani, S., Pierucci, A., Lee, J., and Ryan, P.: All that glitters is not plastic: the case of open-ocean fibres, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3715, https://doi.org/10.5194/egusphere-egu2020-3715, 2020.
EGU2020-2387 | Displays | ITS2.8/OS4.10 | Highlight
First evidence of plastic fallout from the Great Pacific Garbage Patch
Matthias Egger, Fatimah Sulu-Gambari, and Laurent Lebreton
Increasing amounts of plastic debris in the ocean are a global environmental concern. Each year, several million tons of plastic waste enter the ocean from coastal environments. Transported by currents, wind and waves, positively buoyant plastic objects eventually accumulate at the sea surface of subtropical oceanic gyres, forming the so-called ocean garbage patches. To date, the fate of floating plastic debris ‘trapped’ in the oceanic gyres remains largely unknown. To more accurately assess the persistence of floating plastics accumulating in offshore areas, a better understanding of the plastic inputs and outputs into and from ocean garbage patches is crucial. An important component of this mass balance currently missing is the vertical plastic flux from the sea surface of subtropical waters towards the seabed. Numerical models have major difficulties in constraining the sinking flux of plastic to the ocean interior in these areas, since validation against observational data is not yet possible.
Here, we provide the first water column profiles (0-2000 m water depth) of plastic particles (>500 µm) in the North Pacific subtropical gyre (Great Pacific Garbage Patch; GPGP). We show that plastic particles in the water column are mostly in the size range of particles that are apparently missing from the ocean surface, and that their polymer composition is similar to that of floating debris circulating in the surface waters. Furthermore, water column plastic concentrations are higher where sea-surface concentrations are higher, and show a power-law decline with water depth. These findings strongly suggest that plastics present in the deep sea below the GPGP are small fragments of initially buoyant plastic debris that accumulated at the sea surface. Although the amount of plastic in the GPGP water column is significant compared to the surface accumulation, our results further indicate that the ocean water column is unlikely to harbor a major fraction of the tens of millions of metric tons of missing ocean plastic.
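The power-law decline with depth described above can be made concrete with a small sketch: if water-column concentration follows C(z) = C0·z^(-b), both parameters can be recovered by linear regression in log-log space. The depths, concentrations and exponent below are hypothetical illustrations, not the authors' data.

```python
import numpy as np

def fit_power_law(depth_m, conc):
    """Fit C(z) = C0 * z**(-b) by least squares in log-log space."""
    logz, logc = np.log(depth_m), np.log(conc)
    slope, log_c0 = np.polyfit(logz, logc, 1)
    return np.exp(log_c0), -slope  # (C0, exponent b)

# Hypothetical profile generated from an assumed exponent of 0.7
depth = np.array([5.0, 50.0, 500.0, 2000.0])   # m
conc = 10.0 * depth ** -0.7                    # synthetic counts per m^3
c0, b = fit_power_law(depth, conc)
print(round(c0, 2), round(b, 2))  # recovers the assumed parameters: 10.0 0.7
```

Because the synthetic profile is an exact power law, the fit recovers C0 and b to machine precision; real profiles would scatter around the regression line.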
How to cite: Egger, M., Sulu-Gambari, F., and Lebreton, L.: First evidence of plastic fallout from the Great Pacific Garbage Patch, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2387, https://doi.org/10.5194/egusphere-egu2020-2387, 2020.
EGU2020-442 | Displays | ITS2.8/OS4.10
Smart algorithms for monitoring plastic litter
Shungu Garaba, Nina Gnann, and Oliver Zielinski
Plastic Litter (PL) has become increasingly ubiquitous over recent decades, posing socio-economic as well as health problems for the blue and green economy. To date, however, PL monitoring strategies have been based on field sampling by citizens and scientists during recreational, sporting, scientific and clean-up campaigns. To this end, remote sensing technologies combined with artificial intelligence (AI) have gained rising interest as a potential source of complementary, scientific evidence-based information with the capability to (i) detect, (ii) track, (iii) characterise and (iv) quantify PL. Within the smart algorithms, convolutional and recurrent neural networks ingest vast multi- to hyperspectral imagery from smartphones, unmanned aerial systems, fixed observatories, high-altitude pseudo-satellites and space stations. Detection would involve the application of object recognition algorithms to true-colour Red-Green-Blue (RGB) composite images. Typical essential descriptors derived from RGB images include apparent colour, shape, type and dimensions of PL. In addition to object recognition algorithms supported by visual inspection, AI is also used to classify and estimate counts of PL in captured imagery. Quantification assisted by smart systems has the advantage of attaching uncertainties to predictions, a crucial aspect in determining budgets of PL in the natural environment. Hyperspectral data are then utilized to further characterise the polymer composition of PL based on spectral reference libraries of known polymers. Fixed observatories and repeated image capture at regions of interest have prospective applications in tracking of PL. Here we present plausible applications of remote detection, tracking and quantification of PL assisted by smart AI algorithms.
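The quantification-with-uncertainty idea above can be illustrated with a minimal bootstrap sketch. The per-image item counts are hypothetical detector output, not results from the authors' system; the point is only that a resampling scheme turns raw counts into an interval estimate rather than a single number.

```python
import random

def bootstrap_ci(counts, n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for mean items per image."""
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choices(counts, k=len(counts))) / len(counts)
        for _ in range(n_boot)
    )
    lo = means[int(alpha / 2 * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical litter-item counts from ten captured images
counts = [3, 7, 2, 9, 4, 5, 6, 3, 8, 4]
lo, hi = bootstrap_ci(counts)
print(lo <= sum(counts) / len(counts) <= hi)  # sample mean lies in the interval
```

A production system would attach such intervals (or model-based predictive uncertainties) to every reported PL budget, so that downstream users can judge how much weight a given count deserves.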
Smart remote sensing of PL will be integrated into future operational smart observing systems with near real-time capabilities to generate user-defined end-products relevant to plastic litter for citizens, stakeholders and policymakers. These tailor-made descriptors will thus contribute towards scientific evidence-based knowledge important for assisting legislatures in policymaking and awareness campaigns, as well as for evaluating the efficacy of mitigation strategies for plastic litter. The essential descriptors proposed need to include geolocations, quantities, size distributions, shape/form, apparent colour and polymer composition of PL.
How to cite: Garaba, S., Gnann, N., and Zielinski, O.: Smart algorithms for monitoring plastic litter, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-442, https://doi.org/10.5194/egusphere-egu2020-442, 2020.
EGU2020-975 | Displays | ITS2.8/OS4.10
Plastic waste detection assisted by artificial intelligence
Nina Gnann, Shungu Garaba, and Oliver Zielinski
Plastic pollution has a significant impact on living organisms. At the same time, plastics are everywhere in our daily life: plastic is used in packaging, construction of buildings, cars, electronics, agriculture and many other fields. In fact, plastic production has been increasing rapidly since the 1950s. However, plastic waste management strategies have not adapted to these rising amounts, which end up in the blue and green planet. For developing nations the situation is even more complicated, and strategies are still evolving. Here we investigate the possibilities of plastic waste detection in Cambodia, focusing on cities, rivers and coastal areas. Very fine geo-spatial resolution Red-Green-Blue (RGB) drone imagery was captured over regions of interest in Phnom Penh, Sihanoukville and Siem Reap. To date, techniques for detecting plastic litter are based on RGB imagery analyses, generating descriptors such as colour, shape, size and form. However, we believe that by adding infrared wavebands, additional descriptors such as polymer composition or type can be retrieved for improved classification of plastic litter. Furthermore, remote sensing technologies will be merged with object-based deep learning methodologies to enhance identification of plastic waste items, thus creating a robust learning system. Due to the size and complexity of this problem, automated detection, tracking, characterization and quantification of plastic pollution is a key aspect of improving waste management strategies. We therefore explore multispectral band combinations relevant to the detection of plastic waste, as well as operational approaches to imagery processing. This work will contribute towards algorithm development for the analysis of video datasets, enhancing future near real-time detection of plastic litter. Eventually, this scientific evidence-based tool can be utilized by stakeholders, policymakers and citizens.
How to cite: Gnann, N., Garaba, S., and Zielinski, O.: Plastic waste detection assisted by artificial intelligence, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-975, https://doi.org/10.5194/egusphere-egu2020-975, 2020.
EGU2020-19145 | Displays | ITS2.8/OS4.10
Detecting and Identifying Floating Plastic Debris in Coastal Waters using Sentinel-2 Earth Observation Data
Lauren Biermann, Daniel Clewley, Victor Martinez-Vicente, and Konstantinos Topouzelis
Satellite remote sensing is an invaluable tool for observing our Earth systems. However, few studies have succeeded in applying it to the detection of floating litter in the marine environment. We demonstrate that plastic debris aggregated on the ocean surface is detectable in optical data acquired by the European Space Agency (ESA) Sentinel-2 satellites. Furthermore, using an automated classification approach, we show that floating macroplastics are distinguishable from seawater, seaweed, sea foam, pumice, and driftwood.
Sentinel-2 was used to detect floating aggregations likely to include macroplastics across four study sites: the coastal waters of Accra (Ghana), Da Nang (Vietnam), the east coast of Scotland (UK), and the San Juan Islands (BC, Canada). Aggregations were detectable on sub-pixel scales using a Floating Debris Index (FDI), and were composed of a mix of materials including sea foam and seaweed. A probabilistic machine learning approach was then applied to assess whether detected plastics could be discriminated from natural sources of marine debris. Our automated Naïve Bayes classifier was trained using a library of pumice, seaweed, timber, sea foam and seawater detections, as well as validated macroplastics from Durban Harbour (South Africa). Across the four study sites, suspected marine plastics were classified as such with an accuracy approaching 90%. The ‘misclassified’ plastics were mostly identified as seawater, suggesting that too small a fraction of each pixel was filled with material.
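The Naïve Bayes step above can be sketched in miniature. The sketch below is a generic Gaussian Naive Bayes classifier, not the authors' implementation; the feature values (a floating-debris-index-like score and an NDVI-like score per pixel) and class labels are invented for illustration.

```python
import math
from collections import defaultdict

def train_gnb(X, y):
    """Fit per-class priors, feature means and variances for Gaussian NB."""
    by_class = defaultdict(list)
    for xi, yi in zip(X, y):
        by_class[yi].append(xi)
    model = {}
    for cls, rows in by_class.items():
        n = len(rows)
        means = [sum(col) / n for col in zip(*rows)]
        varis = [sum((v - m) ** 2 for v in col) / n + 1e-9
                 for col, m in zip(zip(*rows), means)]
        model[cls] = (math.log(n / len(X)), means, varis)
    return model

def predict(model, x):
    """Return the class with the highest log-posterior (naive independence)."""
    def score(cls):
        prior, means, varis = model[cls]
        return prior + sum(
            -0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
            for xi, m, v in zip(x, means, varis))
    return max(model, key=score)

# Hypothetical (debris-index, NDVI-like) values per training pixel
X = [(0.04, 0.02), (0.05, 0.01), (0.06, 0.03),    # labelled 'plastic'
     (0.00, -0.02), (0.01, -0.03), (0.00, -0.01)]  # labelled 'seawater'
y = ['plastic'] * 3 + ['seawater'] * 3
model = train_gnb(X, y)
print(predict(model, (0.05, 0.02)))  # prints: plastic
```

The appeal of this family of classifiers in the study's setting is that each training material (pumice, seaweed, timber, foam, seawater, plastic) contributes its own spectral distribution, and a new pixel is assigned to whichever distribution explains it best.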
Results from this study show that plastic debris aggregated on the ocean surface can be detected and identified in optical data collected by Sentinel-2. With the aim of generating global ‘hotspot’ maps of floating plastics in coastal waters, work is in progress to automate this two-stage process across the Sentinel-2 archive; the method would also be applicable to drones and other remote sensing platforms with similar band characteristics. To extend remote detection methods to river systems and to optically complex and/or tidal coastal waters, in situ data collection across optical water types is the next key step.
How to cite: Biermann, L., Clewley, D., Martinez-Vicente, V., and Topouzelis, K.: Detecting and Identifying Floating Plastic Debris in Coastal Waters using Sentinel-2 Earth Observation Data, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19145, https://doi.org/10.5194/egusphere-egu2020-19145, 2020.
EGU2020-7312 | Displays | ITS2.8/OS4.10
Beach observations of plastic and marine litter along the Northwest Passage
Peter Gijsbers and Hester Jiskoot
Marine litter and microplastics are everywhere. Even the Arctic Ocean, Svalbard and Jan Mayen Island are contaminated, as various publications confirm. Little, however, is reported about the marine waters and shores of the Canadian Arctic Archipelago. This poster presents the results of a privately funded citizen-science survey scanning remote beaches along the Northwest Passage for marine litter pollution.
The observations were conducted while enjoying the 2019 Northwest Passage sailing expedition of the Tecla, a 1915 gaff-ketch herring drifter. The expedition started in Ilulissat, Greenland, on 1 August and ended in Nome, Alaska, on 18 September. After crossing Baffin Bay, the ship continued along Pond Inlet, Navy Board Inlet, Lancaster Sound, Barrow Strait, Peel Sound, Franklin Strait, Rae Strait, Simpson Strait, Queen Maud Gulf, Coronation Gulf, Amundsen Gulf, Beaufort Sea, Chukchi Sea and Bering Strait. The vessel anchored in the settlement harbours of Pond Inlet, Taloyoak, Gjoa Haven, Cambridge Bay and Herschel Island. In addition, Tecla’s crew made landings at remote beaches on Disko Island (Fortune Bay, Disko Fjord), Beechey Island (Union Bay), Somerset Island (Four Rivers Bay), Boothia Peninsula (Weld Harbour), King William Island (M’Clintock Bay), Jenny Lind Island, and at Kugluktuk and Tuktoyaktuk Peninsula.
Following the categorization of the OSPAR Guideline for Monitoring Marine Litter on Beaches, litter observations were conducted without penetrating the beach surfaces. Beach stretches scanned varied in length from 100-400 m. No observations were conducted at inhabited settlements or at the abandoned settlements visited on Disko Island (Nipisat) and Beechey Island (Northumberland House).
Observations on the most remote beaches found 2-5 strongly bleached or decayed items in places such as Union Bay, Four Rivers Bay, Weld Harbour, Jenny Lind Island (Queen Maud Gulf side). Landings within 15 km of local settlements (Fortune Bay, Disko Fjord, Kugluktuk, Tuktoyaktuk) or near military activity (Jenny Lind Island, bay side) showed traces of local camping, hunting or fishing activities, resulting in item counts between 7 and 29. At the lee shore spit of M’Clintock Bay, significant pollution (> 100 items: including outboard engine parts, broken ceramic, glass, clothing, decayed batteries, a crampon and a vinyl record) was found, in contrast to a near-pristine beach on the Simpson Strait side. The litter type and concentration, as well as the remains of a building and shipwrecked fishing vessel indicate that this is an abandoned settlement, possibly related to the construction of the nearby Distant Early Warning Line radar site CAM-2 of Gladman Point. DEW Line sites have long been associated with environmental disturbances.
Given the 197 beach items recorded, it can be concluded that the beaches of the Canadian Arctic Archipelago, which are blocked by sea ice during most of the year, are not pristine. Truly remote places have received marine pollution for decades to centuries. Where (abandoned) settlements are at close range, pollution from local activities can be found, while ocean currents, wind patterns, ice rafting, distance to river mouths, and flotsam, jetsam and derelicts also determine the type and amount of marine litter along the Northwest Passage.
How to cite: Gijsbers, P. and Jiskoot, H.: Beach observations of plastic and marine litter along the Northwest Passage, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7312, https://doi.org/10.5194/egusphere-egu2020-7312, 2020.
EGU2020-22246 | Displays | ITS2.8/OS4.10
Norwegian Institute for Water Research (NIVA) and Hurtigruten partnership to bring light to the gaps in plastic marine litter knowledge
Verena Meraldi, Tudor Morgan, and Bert Van Bavel
Plastic pollution has become one of today’s biggest environmental problems. Worldwide production of plastic reached 360 million tonnes in 2018, of which approximately 10 million tonnes reached the oceans. Yet very little data is available from remote regions of the world.
Several studies have pointed to the tourism and fishing industries as the main sources of plastic marine litter. Hurtigruten, as an operator of expedition cruise vessels, believes it is our responsibility to invest in the understanding and conservation of the areas we visit. This is reflected in our sustainability efforts: single-use plastics were banned from all our ships and hotels in 2018, we have built the first electric/fuel hybrid ships, and we are converting other ships in the fleet to the same technology or to run on liquid biogas.
Scientific data collection in the polar regions is challenging due to remoteness, the harsh environment and high operational costs. For the last couple of years we have supported the scientific community by transporting researchers and their equipment to and from their study areas in the polar regions. We have established collaborations with numerous scientific institutions, such as the University Centre in Svalbard, the Norwegian Polar Institute, the Institute of Marine Research and the Norwegian Institute for Water Research (NIVA), and we have been actively participating in clean-up projects and are contributing to the SALT and MALINOR projects.
Plastic pollution is having a significant impact on wildlife, and recent studies show that the concentration of microplastics is also greater than previously estimated. The understanding of the status and impacts of marine litter has many gaps; further studies are needed to improve our knowledge of its distribution and interaction with marine biota. In partnership with NIVA we have installed a FerryBox on MS Roald Amundsen. Amongst other sensors, it carries a microplastic collector, and preliminary data from the first collection between Tromsø and Longyearbyen agree with published results from the same area. MS Roald Amundsen will sail to both polar regions, where data on microplastic litter are required, making it the perfect ship of opportunity and platform for data collection. Lastly, the large advantage of using cruise ships as sampling and research platforms is their long-term presence in the polar regions, allowing continued measurements over longer time periods.
How to cite: Meraldi, V., Morgan, T., and Van Bavel, B.: Norwegian Institute for Water Research (NIVA) and Hurtigruten partnership to bring light to the gaps in plastic marine litter knowledge, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22246, https://doi.org/10.5194/egusphere-egu2020-22246, 2020.
Plastic pollution has become one of today’s biggest environmental problems. Yearly worldwide production of plastic was 360 million tonnes in 2018, of which approximately 10 million reached the oceans. But there is very little data from remote regions of the world.
Several studies have pointed to the tourism and fishing industries as the main sources of plastic marine litter. Hurtigruten as an operator of expedition cruise vessels, believes that it is our responsibility to invest in the understanding and conservation of the areas we visit, this is reflected on our sustainability efforts: Single Use Plastics were banned from all our ships and Hotels in 2018, we have built the first electric/fuel hybrid ships and are transforming other ships in the fleet to the same technology or to run on Liquid biogas.
Scientific data collection in the polar regions is challenging due to remoteness, the harsh environment and high operational costs. For the last couple of years, we have supported the scientific community by transporting researchers and their equipment to and from their study areas in polar regions, we have established collaborations with numerous scientific institutions, such as University Centre in Svalbard, Norwegian Polar Institute, Institute for Marine Research, and Norwegian Institute for Water Research (NIVA) and we have been actively participating in clean-up projects, and are contributing to the SALT and MALINOR projects.
Plastic pollution is having a significant impact on wildlife, and recent studies show that the concentration of microplastics is also greater than estimated. The understanding of the status and impacts of marine litter has many gaps, further studies are needed to improve our knowledge of its distribution and interaction with the marine biota. In partnership with NIVA we have installed a FerryBox on MS Roald Amundsen. Amongst other sensors it has a microplastic collector and preliminary data from the first collection between Tromsø and Longyearbyen agree with published results from the same area. MS Roald Amundsen will sail to both polar areas, where data on microplastic litter is required, making it the perfect ship of opportunity and platform for data collection. Lastly, the large advantage of using cruise ships as sampling and research platforms is the long-term presence in the polar regions, allowing for continued measurements over longer time periods.
How to cite: Meraldi, V., Morgan, T., and Van Bavel, B.: Norwegian Institute for Water Research (NIVA) and Hurtigruten partnership to bring light to the gaps in plastic marine litter knowledge, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22246, https://doi.org/10.5194/egusphere-egu2020-22246, 2020.
EGU2020-16315 | Displays | ITS2.8/OS4.10
What’s that Floating in my Soup? Characterisation and Handling of Floating Debris in the Great Pacific Garbage Patch
Fatimah Sulu-Gambari, Matthias Egger, and Laurent Lebreton
There is extensive documentation of plastic debris in the marine environment [1]. Citizen science programs and tracking apps have more recently been used to collect data on plastics in marine settings [1]. These programs, however, are focussed on debris collected from beach cleanups and coastal environments. Large debris currently afloat in ocean garbage patches, which contributes significantly to marine plastic pollution, is less well characterised. Buoyant plastics accumulate offshore in the five ocean gyres, the largest of which is the Great Pacific Garbage Patch (GPGP) in the North Pacific Ocean. There, they are seen floating in a loosely concentrated ‘soup’. Over time they degrade in saltwater, under UV radiation, with the help of wind and wave action. They also serve as substrates for trace metal and organic pollutant adsorption, as well as for the growth of microbial consortia and larger, potentially invasive organisms. There is currently limited data collection on sources of large floating plastics in ocean gyres. The majority of data collected on plastics in the garbage patches is based on trawl sampling techniques that exclude objects larger than 0.5 m [2]. Large debris is important for elucidating the overall mass of plastic in the patches. We know that microplastics comprise 8% of the GPGP, so larger objects constitute the greater fraction of the total plastic mass [2], about which we know little. It is important to understand what types of debris accumulate in the patches, their land- or marine-based origins, and the locations from which they enter the ocean. Where the debris is produced and what practices (commercial, cultural, industrial) contribute to its accumulation in the garbage patches is also pivotal data that needs to be collected. This information, coupled with data on how long the plastics persist and how well they persevere in the marine environment, is necessary for creating effective and efficient mitigation strategies.
References
[1] Jambeck, J. R. & Johnsen, K. Citizen-Based Litter and Marine Debris Data Collection and Mapping. Computing in Science & Engineering, 17, 20-26 (2015).
[2] Lebreton, L. et al. 2018. Evidence that the Great Pacific Garbage Patch is rapidly accumulating plastic. Scientific Reports, 8, 4666 (2018).
How to cite: Sulu-Gambari, F., Egger, M., and Lebreton, L.: What’s that Floating in my Soup? Characterisation and Handling of Floating Debris in the Great Pacific Garbage Patch, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-16315, https://doi.org/10.5194/egusphere-egu2020-16315, 2020.
EGU2020-20962 * | Displays | ITS2.8/OS4.10 | Highlight
Searching for the missing plastic: a global surface mass budget for floating ocean plastics
Laurent Lebreton and Matthias Egger
Predicted global figures for plastic debris accumulation in the ocean surface layer are on the order of hundreds of thousands of metric tons, representing only a few percent of estimated annual emissions into the marine environment. A commonly accepted explanation for this difference is that positively buoyant macroplastic objects do not persist on the ocean surface: subject to degradation into microplastics, the major part of the mass is predicted to have settled below the surface. However, we argue that such an emission-degradation model cannot explain the occurrence of decades-old objects collected by oceanic expeditions. We show that debris circulation dynamics in coastal environments may be a better explanation for this difference. The results presented here suggest that there is a significant time interval, on the order of several years to decades, between terrestrial emissions and representative accumulation in offshore waters. Importantly, our results also indicate that the current generation of secondary microplastics in the global ocean is mostly a result of the degradation of objects produced in the 1990s and earlier.
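The coastal-lag argument above can be illustrated with a toy two-box budget (a coastal stock and an offshore stock). This is a minimal sketch under purely illustrative rates; none of the parameter values or variable names come from the study itself.

```python
# Toy two-box sketch of the coastal-delay hypothesis.
# All parameter values are illustrative assumptions, not figures from the study.

def simulate(years, emissions_per_year=1.0, leak_rate=0.03, degrade_rate=0.01):
    """March a coastal and an offshore plastic stock forward in yearly steps.

    leak_rate:    fraction of the coastal stock exported offshore each year.
    degrade_rate: fraction of each stock lost (fragmentation/settling) per year.
    """
    coastal, offshore = 0.0, 0.0
    for _ in range(years):
        exported = leak_rate * coastal
        coastal += emissions_per_year - exported - degrade_rate * coastal
        offshore += exported - degrade_rate * offshore
    return coastal, offshore

coastal, offshore = simulate(40)
# With a small annual export rate, most of the mass still resides in the
# coastal box after four decades of constant emissions.
ratio = offshore / (coastal + offshore)
```

Even this crude budget reproduces the qualitative point: the offshore stock lags cumulative emissions by years to decades, so today's offshore plastic reflects emissions from well in the past.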
How to cite: Lebreton, L. and Egger, M.: Searching for the missing plastic: a global surface mass budget for floating ocean plastics., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20962, https://doi.org/10.5194/egusphere-egu2020-20962, 2020.
EGU2020-7442 | Displays | ITS2.8/OS4.10
Macro-plastic weathering in a coastal environment: field experiment in Chesapeake Bay, Maryland
Marzia Rizzo, Benjamin Lane, Sairah Malkin, Carmela Vaccaro, Umberto Simeoni, William Nardin, and Corinne Corbau
It is now widely recognized that marine plastics, which are strongly resistant to chemical and biological degradation, have become a widespread and massive pollutant in the world’s oceans. Despite this resistance, in the environment, larger plastic items fragment and degrade into secondary microplastics which are ingestible by some marine organisms and are therefore a potential threat to aquatic foodwebs. The present study aims to better understand factors that contribute to the weathering of plastics in a coastal marine environment, where most microplastics appear to be generated.
Here we performed a field experiment to test the influence of different coastal conditions on macro-plastic weathering. Strips of commercial grade high-density polyethylene (HDPE) and polystyrene (PS) were mounted in replicate on racks (similar in appearance to keys on a glockenspiel, though all of the same length) and deployed at different treatment depths (subtidal versus intertidal) and different treatment hydrodynamic intensity zones (erosional versus depositional) in a sub-estuary of Chesapeake Bay (Maryland, USA). Strips were collected after environmental exposure of 4, 8 and 43 weeks and were analyzed for mass loss, surface chlorophyll accumulation, and surface appearance via SEM imaging.
We observed that the PS strips degraded more quickly than the HDPE strips. The results show only minor mass variation, and in some samples even a slight mass increase, contrary to expectation. This was probably due to the deposition of clay and the presence of microorganisms within the microstructure of the strips, as observed by SEM. Moreover, the SEM images show different kinds of fragmentation, with holes or with desquamation. Fragmentation was most marked for the PS strips at intertidal depths, where hydrodynamic energy is more intense. Finally, chlorophyll concentration increased over time on both the subtidal depositional PS strips and the subtidal erosional HDPE strips, which experience lower hydrodynamic energy than the intertidal zones; this appears to confer greater protection on the plastic, which therefore weathers less.
How to cite: Rizzo, M., Lane, B., Malkin, S., Vaccaro, C., Simeoni, U., Nardin, W., and Corbau, C.: Macro-plastic weathering in a coastal environment: field experiment in Chesapeake Bay, Maryland, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7442, https://doi.org/10.5194/egusphere-egu2020-7442, 2020.
EGU2020-9473 | Displays | ITS2.8/OS4.10
Marine macrophytes retain microplastics
Irina Chubarenko, Elena Esiukova, Olga Lobchuk, Alexandra Volodina, Anastasiya Kupriyanova, and Tatiana Bukanova
Plastic contamination of marine beaches, sediments, and water is widely reported. It is known that a lot of plastic debris appears on marine shores after storms, together with natural marine litter such as ragged vegetation and pieces of wood. The goal of our field campaign in the southeastern part of the Baltic Sea was to check whether growing macrophytes also concentrate and retain plastics, particularly in the microplastic (MP, 0.2-5 mm here) size range. Three summer expeditions were conducted (July 30, August 5 and 7, 2019) in the coastal zone (depths down to 10 m), where communities of attached macroalgae (Furcellaria lumbricalis, Coccotylus truncatus, Polysiphonia fucoides, Cladophora rupestris) have developed on underwater boulders off Cape Taran. Samples were collected at 8 stations, covering areas with filamentous algae (at depths of 3.2 and 4 m) and with the perennial alga Furcellaria (depths of 5.6 and 8.2 m). Along with sampling of growing algae (from a 25×25 cm area in triplicate), a hand pump was used to sample 20-100 liters of sea water both from within the algae thickets and from algae-free water in the surrounding area.
The samples were processed and examined in the laboratory. Microplastic particles were found in all collected samples. Preliminary analysis shows 1.3-5.3 times higher microplastic contamination in water samples taken from the algae thickets than in samples taken from open water nearby. The majority of microparticles are fibers, mainly colorless and blue, but also red, black, golden, and yellow.
Investigations are supported by the Russian Science Foundation, grant No. 19-17-00041.
How to cite: Chubarenko, I., Esiukova, E., Lobchuk, O., Volodina, A., Kupriyanova, A., and Bukanova, T.: Marine macrophytes retain microplastics, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9473, https://doi.org/10.5194/egusphere-egu2020-9473, 2020.
EGU2020-2303 | Displays | ITS2.8/OS4.10
Microplastic transport, deposition and burial in seafloor sediments by turbidity currents
Florian Pohl, Joris Eggenhuisen, Ian Kane, and Michael Clare
Plastic pollution of the world’s oceans represents a threat to marine ecosystems and human health and has come under increasing scrutiny from the general public. Today the global input of plastic waste into the oceans is on the order of 10 million tons per year and is predicted to rise by an order of magnitude by 2025; much of this plastic ends up on the seafloor. Plastics, and microplastics, are known to be concentrated in submarine canyons due to their proximity to terrestrial plastic sources, i.e. rivers. Plastics are transported in canyons by turbidity currents, mixtures of sediment and water that flow down-canyon due to their density; these flows can also ‘flush’ canyons, eroding and entraining the sediment lining the canyon walls and floor. A single turbidity current can last for weeks and transport more sediment than the annual flux of all terrestrial rivers combined. Although it is known that these flows play a critical role in delivering terrestrial sediment and organic carbon to the seafloor, their ability to transport and bury plastics is poorly understood. Using flume experiments, we investigate turbidity currents as agents for the transport and burial of microplastic fragments and fibers. Microplastic fragments are focused at the flow base, whereas fibers are more homogeneously distributed throughout the flow. Surprisingly, though, the resultant deposits show the opposite trend, with fibers occurring at a higher concentration than fragments. We explain this observation with a depositional mechanism whereby fibers are dragged out of suspension by settling sand grains, are trapped in the aggrading sediment bed, and are buried in the deposits. Conversely, fragments may remain suspended in the flow and are less likely to be trapped on the bed.
Our results suggest that turbidity currents can transport microplastics over long distances across the ocean floor, and that turbidity currents potentially distribute and bury large quantities of microplastics in seafloor sediments.
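The depositional mechanism described above rests on the density contrast between settling sand grains and near-neutral or buoyant plastic particles. A back-of-envelope Stokes-law comparison makes the contrast explicit; the particle sizes and densities below are generic illustrative values, not measurements from these experiments.

```python
def stokes_settling_velocity(d, rho_p, rho_f=1025.0, mu=1.07e-3, g=9.81):
    """Stokes settling velocity (m/s) for a small sphere of diameter d (m)
    and density rho_p (kg/m^3) in seawater of density rho_f and dynamic
    viscosity mu. Negative values mean the particle rises. Valid only at
    low particle Reynolds number."""
    return g * d ** 2 * (rho_p - rho_f) / (18 * mu)

# Illustrative values: a 100-micron quartz grain vs. a 100-micron
# polyethylene fragment (typical densities, assumed here).
w_sand = stokes_settling_velocity(100e-6, 2650.0)  # settles (positive)
w_pe = stokes_settling_velocity(100e-6, 950.0)     # rises (negative)
```

The sign difference alone shows why sand grains rain out of a decelerating flow while low-density fragments can stay in suspension; fibers, per the mechanism proposed above, are carried down mechanically by the settling grains rather than by their own excess density.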
How to cite: Pohl, F., Eggenhuisen, J., Kane, I., and Clare, M.: Microplastic transport, deposition and burial in seafloor sediments by turbidity currents, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2303, https://doi.org/10.5194/egusphere-egu2020-2303, 2020.
EGU2020-21376 | Displays | ITS2.8/OS4.10
The influence of biofilms and mineral loading on marine plastic fate
Nicole Rita Posth, Joan Antoni Carreres Calabuig, Sascha Mueller, Kelsey Rogers, and Nynke Keulen
Plastic pollution is a global concern and a potential marker of the Anthropocene, yet controls on the environmental fate of this contaminant remain underexplored. Synthetic polymers emitted to aquatic systems undergo chemical, physical and biological forces that affect their weathering, aggregation, degradation, leaching, transport and burial. In the aquatic environment, plastic surfaces attract both biological and mineralogical loading. The presence of biofilm on marine plastics suggests a significant microbial role in the fate of plastic in this new ecological niche, called the Plastisphere. Microorganisms may influence the degradation, transport and burial of plastic in the sediment, but also plastic's incorporation into biogeochemical cycles. Likewise, mineral crystallization on plastic surfaces (e.g., phosphate- or iron-rich phases), whether induced by microbial processes or formed abiotically, may play an important role in the aggregation, transport, degradation and burial of meso- to nanoscale plastics.
Here, we present our current field and laboratory investigations of biological and mineralogical loading of plastics in various geochemical settings. We combine bioimaging (He-ion microscopy (HIM), Scanning Electron Microscopy-Energy Dispersive X-Ray Spectroscopy (SEM-EDS), microbial community and eco-physiology studies, as well as elemental analysis to test mechanisms of loading on plastics, aggregation, transport, and potential impact on element cycling. Results of an on-going in situ study of polystyrene (PS), polyethylene (PE), marine paint, and wood exposed in Svanemøllen Harbor, Copenhagen and laboratory experiments are described. We explore whether surface characteristics and biogeochemical setting are important drivers for the development of mineral-rich biofilm and the role of these mineral-microbe associations in the fate of plastics.
How to cite: Posth, N. R., Carreres Calabuig, J. A., Mueller, S., Rogers, K., and Keulen, N.: The influence of biofilms and mineral loading on marine plastic fate, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21376, https://doi.org/10.5194/egusphere-egu2020-21376, 2020.
EGU2020-21417 | Displays | ITS2.8/OS4.10
Investigating the impact of wind, waves and currents on the distribution of surface drifting particles with drifter data and a high resolution numerical model in the nearshore region
Florian Hahner, Jens Meyerjürgens, Tim Wüllner, Karsten Alexander Lettmann, Thomas Badewien, Oliver Zielinski, and Jörg-Olaf Wolff
A coupled wave and ocean model within the COAWST Modelling System is used in a one-way nesting scenario to investigate the importance of wind, surface currents and Stokes drift for the distribution of surface-drifting objects in the nearshore region of the East Frisian barrier island Spiekeroog in the North Sea. Stokes drift and surface currents are computed on a high-resolution grid. Combination with meteorological data, Lagrangian floats, and in situ data from surface drifters and wave radar measurements allows for a realistic estimation of wind drag coefficients and Stokes drift. For this purpose, GPS box drifters that resemble surface-floating macroplastics have been developed. Complex topographic features, with shallow areas and deep channels, lead to strongly heterogeneous wave and current fields in this coastal region. Due to the high resolution of our numerical model, these features can be described with the needed accuracy, while computational costs are minimized by using a two-step nesting approach. We show that Stokes drift plays a major role in shallow coastal regions, even exceeding the influence of wind drag, and is hence key to realistic descriptions of beaching and the identification of litter accumulation.
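As a rough point of reference for the magnitude of the effect discussed above, the surface Stokes drift of a single deep-water wave can be estimated from linear wave theory. This is the textbook monochromatic deep-water formula, not output from the COAWST setup described here, and finite-depth dispersion in the shallow areas the abstract concerns would modify the numbers; the wave height and period below are assumed example values.

```python
import math

def stokes_drift_surface(H, T, g=9.81):
    """Surface Stokes drift u_s = omega * k * a**2 (m/s) for a monochromatic
    deep-water wave of height H (m) and period T (s), with amplitude a = H/2
    and deep-water wavenumber k = omega**2 / g."""
    omega = 2 * math.pi / T
    k = omega ** 2 / g
    a = H / 2
    return omega * k * a ** 2

# Example: a 1 m, 5 s wave gives a surface Stokes drift of a few cm/s,
# i.e. comparable to a typical wind-drag contribution in light winds.
u_s = stokes_drift_surface(H=1.0, T=5.0)
```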
How to cite: Hahner, F., Meyerjürgens, J., Wüllner, T., Lettmann, K. A., Badewien, T., Zielinski, O., and Wolff, J.-O.: Investigating the impact of wind, waves and currents on the distribution of surface drifting particles with drifter data and a high resolution numerical model in the nearshore region, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21417, https://doi.org/10.5194/egusphere-egu2020-21417, 2020.
EGU2020-9752 | Displays | ITS2.8/OS4.10
The Stokes drift in ocean surface drift prediction
Michel Tamkpanka Tamtare, Dany Dumont, and Cédric Chavanne
Ocean surface drift forecasts are essential for numerous applications. They are a central asset in search and rescue and oil spill response operations, but are also used for predicting the transport of pelagic eggs, larvae, detritus and other organisms and solutes, for evaluating ecological isolation of marine species, for tracking plastic debris, and for environmental planning and management. The accuracy of surface drift forecasts depends to a large extent on the quality of the ocean current, wind and wave forecasts, but also on the drift model used. The standard Eulerian leeway drift model used in most operational systems considers near-surface currents provided by the top grid cell of the ocean circulation model plus a correction term proportional to the near-surface wind. Such a formulation assumes that the 'wind correction term' accounts for many processes, including windage, unresolved ocean current vertical shear, and wave-induced drift. However, the latter two processes are not necessarily linearly related to the local wind velocity. We propose three other drift models that attempt to account for the unresolved near-surface current shear by extrapolating the near-surface currents to the surface assuming Ekman dynamics. Among them, two models explicitly consider the Stokes drift, one without and the other with a wind correction term. We assess the performance of the drift models using observations from drifting buoys deployed in the Estuary and Gulf of St. Lawrence, Canada. Drift model inputs are obtained from regional atmospheric, ocean circulation, and spectral wave models. The performance of these drift models is evaluated based on a number of error metrics (e.g. speed, direction, separation distance between the observed and simulated positions) and skill scores determined at different lead times ranging from 3 h to 72 h.
Results show that extrapolating the top-layer ocean model currents to the surface assuming Ekman dynamics for the ageostrophic currents, and adding the Stokes drift predicted by a spectral wave model, leads to the best drift forecast skills without the need to include a wind correction term.
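The two formulations compared in this study can be sketched numerically. The following Python sketch is illustrative only: the leeway coefficient, the eddy viscosity and Coriolis values, and the simplification of treating the whole top-cell current as ageostrophic Ekman flow are our assumptions, not the authors' exact scheme. Complex numbers encode horizontal velocity vectors (east + i*north):

```python
import numpy as np

def leeway_drift(u_ocean, u_wind, alpha=0.03):
    """Standard Eulerian leeway model: top grid cell current plus a
    correction term proportional to the near-surface wind (alpha is
    an assumed leeway coefficient of ~3%)."""
    return u_ocean + alpha * u_wind

def ekman_stokes_drift(u_top, z_top, u_stokes, f=1.0e-4, K=1.0e-2):
    """Extrapolate the top-cell current to the surface assuming an
    Ekman spiral u(z) = u(0) * exp((1+i) z / d) for z <= 0, then add
    the wave-model Stokes drift.  Simplification: the whole top-cell
    current is treated as ageostrophic Ekman flow."""
    d = np.sqrt(2.0 * K / abs(f))                  # Ekman depth scale (m)
    u_surface = u_top / np.exp((1.0 + 1.0j) * z_top / d)
    return u_surface + u_stokes

# 0.2 m/s eastward top-cell current at 1 m depth, 8 m/s eastward wind,
# 0.05 m/s eastward Stokes drift:
u1 = leeway_drift(0.2 + 0.0j, 8.0 + 0.0j)
u2 = ekman_stokes_drift(0.2 + 0.0j, -1.0, 0.05 + 0.0j)
```

In the second model no wind correction term appears: the unresolved shear is carried by the Ekman extrapolation and the wave-induced drift by the Stokes term, mirroring the best-performing formulation reported above.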
How to cite: Tamtare, M. T., Dumont, D., and Chavanne, C.: The Stokes drift in ocean surface drift prediction, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9752, https://doi.org/10.5194/egusphere-egu2020-9752, 2020.
EGU2020-21895 | Displays | ITS2.8/OS4.10
MOHID-Lagrangian: A Lagrangian transport model from local to global scales. Applications to the marine litter problem.
Hilda de Pablo, Daniel Garaboa-Paz, Ricardo Canelas, Francisco Campuzano, and Ramiro Neves
The CleanAtlantic project (http://www.cleanatlantic.eu/) aims to protect biodiversity and ecosystem services in the Atlantic Area by improving the knowledge and capabilities to monitor, prevent and remove (macro) marine litter. The project will also contribute to raising awareness and changing attitudes among stakeholders. Marine litter originates from diverse sources (land- and sea-based) and has no frontiers, as coastal and ocean circulation turns it into a transnational issue that demands collaborative work and coordination. The need for consistent transnational approaches is at the heart of the Marine Strategy Framework Directive (MSFD) implementation, which requires consistency in marine litter assessment, monitoring and the development of programmes of measures. This modelling objective, within the CleanAtlantic project, is fully aligned with collective action no. 55 of the OSPAR Regional Plan, which aims to develop sub-regional or regional maps of hotspots of floating litter. These maps will be based on mapping the circulation of floating masses of marine litter, identifying hotspots of accumulation in coastal areas, and assessing the role of prevailing currents and winds. The biggest challenge in marine litter modelling is the heterogeneity of the actual litter particles, which span a wide range of physical properties such as size, density and shape. This, together with a strong interaction with the medium through processes such as degradation, sinking and beaching, an inherent sensitivity to initial conditions due to chaotic advection by ocean currents, the effect of wind and waves, and the time and space scales necessary to resolve ocean transport, shows how intricate marine litter modelling can be.
The number of free parameters, the absence of well-known initial conditions, and the lack of a precise set of equations describing all the processes involved require large ensembles of simulations to explore a range of possible scenarios and derive useful information about the motion of marine litter. As part of the project, the MARETEC modelling group at the Instituto Superior Técnico – Universidade de Lisboa, in collaboration with the University of Santiago de Compostela, developed a Lagrangian transport model, MOHID-Lagrangian. This tool can be applied to forecast the formation of retention areas (hotspots) with the highest probability of litter accumulation in any particular region. The strengths of this open-source Lagrangian tool include its easy implementation, robustness, computational efficiency (it can simulate millions of particles in a short time), capacity to use Eulerian circulation fields from any other model, and ability to simulate different types of Lagrangian particles. The capability of the models to predict the origin of marine litter accumulated on the seafloor and in coastal areas was assessed, and the connection of major rivers with sinks of marine litter during heavy rain conditions was studied. When appropriate, models were calibrated by matching observed and predicted marine litter accumulation locations on the shoreline. The area of influence of land- and sea-based marine litter sources was assessed, and different scenarios of mitigation measures will be evaluated.
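At its core, a Lagrangian transport model of this kind integrates particle positions through an Eulerian velocity field. The toy sketch below uses forward-Euler stepping in a steady analytic 2-D field; the function names and the test field are our illustrative assumptions, and the real MOHID-Lagrangian additionally handles time-varying gridded fields, diffusion, beaching and particle properties:

```python
import numpy as np

def advect(positions, velocity_field, dt, n_steps):
    """Forward-Euler Lagrangian advection of N particles in a steady
    2-D velocity field given as a callable positions -> velocities."""
    pos = np.asarray(positions, dtype=float).copy()
    for _ in range(n_steps):
        pos += dt * velocity_field(pos)      # (N, 2) array of velocities
    return pos

def rotation(pos, omega=1.0e-4):
    """Solid-body rotation u = omega * (-y, x), a simple test field."""
    x, y = pos[:, 0], pos[:, 1]
    return omega * np.column_stack([-y, x])

# Advect a Gaussian cloud of 1000 particles for 100 one-minute steps.
cloud = np.random.default_rng(0).normal(0.0, 100.0, size=(1000, 2))
final = advect(cloud, rotation, dt=60.0, n_steps=100)
```

Millions of particles are handled the same way: the per-step cost is a single vectorized field evaluation, which is what makes ensemble runs of this size tractable.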
How to cite: de Pablo, H., Garaboa-Paz, D., Canelas, R., Campuzano, F., and Neves, R.: MOHID-Lagrangian: A lagrangian transport model from local to globals scales. Applications to the marine litter problem., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21895, https://doi.org/10.5194/egusphere-egu2020-21895, 2020.
EGU2020-4795 | Displays | ITS2.8/OS4.10
3D hotspots of marine litter in the Mediterranean: a modeling study
Javier Soto-Navarro, Gabriel Jordá, Salud Deudero, Montserrat Compa, Carme Alomar, and Ángel Amores
The 3D dispersion of marine litter (ML) over the Mediterranean basin has been simulated using the current fields from a very high resolution regional circulation model (RCM) as the basis for a 3D Lagrangian model. Three simulations have been carried out to mimic the evolution of ML with density lower than, in the range of, or higher than seawater. In all cases a realistic distribution of ML sources has been used. Our results show that the accumulation/dispersion areas of the floating and neutrally buoyant particles are practically the same, although in the latter case the particles are distributed throughout the water column, with 90% of them inside the photic layer. The denser particles rapidly sink and reach the seafloor close to their origin. The analysis of the temporal variability of the ML concentration shows that the regions of higher variability mostly coincide with the accumulation regions. Seasonal variability occurs at the sub-basin scale as a result of the particle redistribution induced by the seasonal variability of the current field. The comparison with previous studies suggests that the accuracy of numerical studies strongly depends on the quality of the information about ML sources and on the modelling strategy adopted. Finally, our results can be used to guide the design of effective observational sampling strategies to estimate the actual ML concentrations in the Mediterranean.
How to cite: Soto-Navarro, J., Jordá, G., Deudero, S., Compa, M., Alomar, C., and Amores, Á.: 3D hotspots of marine litter in the Mediterranean: a modeling study, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4795, https://doi.org/10.5194/egusphere-egu2020-4795, 2020.
EGU2020-7975 | Displays | ITS2.8/OS4.10
Differential microbial colonization on microplastic in the Mediterranean Sea coastal zone
Annika Vaksmaa, Katrin Knittel, Alejandro Abdala Asbun, Maaike Goudriaan, Andreas Ellrott, Harry Witte, and Helge Niemann
Ocean plastic debris poses a large threat to the marine environment. Millions of tons of plastic end up in the ocean each year, and the Mediterranean Sea is one of the most plastic-polluted seas. Ocean plastic particles are typically covered with microbial biofilms, but it remains unclear whether different polymer types are colonized by different communities. Knowledge of this aspect strengthens our understanding of whether microbes merely use plastic debris as an attachment surface or may even contribute to the degradation of plastic. To gain a better understanding of the composition and structure of biofilms on microplastic particles (MPs) in the Mediterranean Sea, we analyzed the microbial communities covering floating MPs in a bay/marina (Marina di Campo) on the island of Elba. MPs were collected with a plankton net (mesh size 50 µm), fixed for fluorescence microscopy, and stored for subsequent DNA extraction and identification of the polymer by Raman spectroscopy. The particles were mainly composed of polyethylene (PE), polypropylene (PP) and polystyrene (PS); they were often brittle and cracked (PE, PP) and showed visual signs of biofouling (PE, PP, PS). Fluorescence in situ hybridization and imaging of single MPs by high-resolution confocal laser scanning microscopy revealed dense microbial colonization. 16S rRNA gene amplicon sequencing (Illumina MiSeq) revealed a higher abundance of archaeal sequences on PS (up to 29% of the reads) in comparison to PE or PP (up to 3% of the reads). The bacterial community in the biofilms on each of the three plastic types consisted mainly of the orders Flavobacteriales, Rickettsiales, Alteromonadales, Cytophagales, Rhodobacterales and Oceanospirillales. Furthermore, we found a significant difference in the community composition of biofilms on PE compared to PP and PS, but not between PP and PS.
The indicator taxa on PE were the Calditrichales, detected at a 10 times higher sequence abundance on PE than on PP and PS, as well as several uncultured orders. This study sheds light on preferential microbial attachment and biofilm formation on microplastic particles, yet it remains to be revealed whether, and which of, these microbes may contribute to plastic degradation.
How to cite: Vaksmaa, A., Knittel, K., Abdala Asbun, A., Goudriaan, M., Ellrott, A., Witte, H., and Niemann, H.: Differential microbial colonization on microplastic in the Mediterranean Sea coastal zone, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7975, https://doi.org/10.5194/egusphere-egu2020-7975, 2020.
EGU2020-19500 | Displays | ITS2.8/OS4.10
Marine litter in local environments from mussel aquiculture activities: modelling and validation
Daniel Garaboa-Paz, Sara Cloux-González, Pedro Montero-Vilar, and Vicente Pérez-Muñuzuri
The initial conditions of marine litter transport models remain one of the main obstacles to producing accurate results and useful information for stakeholders. The amount and type of marine debris emitted by the different sources introduce huge uncertainty.
In local marine environments under industrial activity, the sources are confined in space and time, and some industrial activities introduce characteristic debris objects. This allows us to reduce the uncertainties mentioned above in the marine litter modelling problem.
One of these activities is mussel aquiculture. In Galicia (NW Spain), mussel farms (Fig. 1) are based on floating rafts inside the rías (estuaries), with submerged vertical ropes to which the mussels attach as they grow. To prevent mussel detachment, plastic sticks called mussel pegs or stoppers, with an average length of 22 cm and width of 2 cm, are used (Fig. 2). These mussel pegs can be lost during mussel harvesting; an estimated 3 million units are lost per year due to this activity.
The CleanAtlantic project (http://www.cleanatlantic.eu/) aims to protect biodiversity and ecosystem services in the Atlantic Area by improving the knowledge and capabilities to monitor, prevent and remove (macro) marine litter. Within the scope of this project, we focus on modelling the floating mussel pegs lost through mussel farming activity in the Ría de Arousa, in Galicia (northwest Spain).
To that end, we use met-ocean operational model data from MeteoGalicia to perform Lagrangian simulations with the MOHID-Lagrangian transport model, obtaining mussel peg concentrations and probability maps for the surrounding areas inside the Ría de Arousa for the years 2018-2019. We also analyze the impact of the different met-ocean conditions on beaching and coastal accumulation.
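Probability maps of this kind are typically obtained by binning particle positions onto a grid. This generic post-processing sketch shows the idea; the grid extent is a rough, assumed bounding box for the Ría de Arousa, and the uniform random positions stand in for actual simulation output:

```python
import numpy as np

def probability_map(lon, lat, lon_edges, lat_edges):
    """Bin particle positions into grid cells and normalise the counts
    so the resulting map sums to 1 (a per-cell encounter probability)."""
    counts, _, _ = np.histogram2d(lon, lat, bins=[lon_edges, lat_edges])
    return counts / counts.sum()

rng = np.random.default_rng(1)
lon = rng.uniform(-9.1, -8.7, 5000)        # placeholder particle output
lat = rng.uniform(42.4, 42.7, 5000)
pmap = probability_map(lon, lat,
                       np.linspace(-9.1, -8.7, 41),
                       np.linspace(42.4, 42.7, 31))
```

Accumulating such maps over many release dates and met-ocean conditions yields the hotspot and beaching-probability products described above.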
Finally, we validate the results against data from beach-cleaning surveys carried out at beaches inside the ría during 2018 and 2019.
How to cite: Garaboa-Paz, D., Cloux-González, S., Montero-Vilar, P., and Pérez-Muñuzuri, V.: Marine litter in local environments from mussel aquiculture activities: modelling and validation, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19500, https://doi.org/10.5194/egusphere-egu2020-19500, 2020.
EGU2020-18476 * | Displays | ITS2.8/OS4.10 | Highlight
Marine plastic waste input between 1990-2015 and potential beaching scenarios
Charlotte Laufkoetter, Kevin Lang, Fabio Benedetti, Victor Onink, and Meike Vogt
Marine plastic pollution has been recognized as a serious issue of global concern, with substantial risks for marine ecosystems, fisheries, and the food supply of people. Yet the amount of plastic entering the ocean from land and rivers is barely understood. Currently, estimates exist for coastal plastic input in 2010 at country-level resolution and for riverine plastic input in 2017. Key limitations are the restricted availability of data on plastic waste production, waste collection and waste management. In addition, the transport of mismanaged plastic by wind and rivers is currently not well understood.
We present a model to estimate the global plastic input to the ocean for the years 1990-2015 on a 0.1°x0.1° raster. To this end, we first train a machine learning model (random forests) and a linear mixed model to predict plastic waste production at the country level, using municipal waste collection data and several socio-economic predictor variables. We then estimate the amount of plastic waste that enters the environment, using high-resolution population data and waste management data for each country. This is combined with distance-based probabilities of land and river transport to obtain the annual amount of plastic entering the ocean at 0.1°x0.1° spatial resolution. Our results indicate that global plastic waste production increased roughly linearly between 1990 and 2015. However, estimating the amount of mismanaged waste and its subsequent transport towards the ocean is subject to high uncertainties.
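The regression step above uses random forests on socio-economic predictors. As a self-contained stand-in (real country data and a full forest implementation are beyond an abstract, so the synthetic single predictor, the stump depth and all names here are our assumptions), the bagging idea behind random forests can be illustrated with bootstrap-aggregated regression stumps:

```python
import numpy as np

def fit_stump(x, y):
    """Best single-threshold regression stump (minimum-SSE split)."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best = (np.inf, xs[0], ys.mean(), ys.mean())
    for i in range(1, len(xs)):
        left, right = ys[:i], ys[i:]
        sse = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
        if sse < best[0]:
            best = (sse, xs[i], left.mean(), right.mean())
    return best[1:]                       # (threshold, left mean, right mean)

def bagged_stumps(x, y, n_trees=50, seed=0):
    """Bagging: fit each stump on a bootstrap resample, average predictions."""
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(x), len(x))   # bootstrap sample indices
        models.append(fit_stump(x[idx], y[idx]))
    def predict(xq):
        preds = [np.where(xq < t, lo, hi) for t, lo, hi in models]
        return np.mean(preds, axis=0)
    return predict

# Synthetic illustration: waste per capita rising with log GDP per capita.
rng = np.random.default_rng(1)
log_gdp = rng.uniform(6.0, 11.0, 200)
waste = 0.2 * log_gdp + rng.normal(0.0, 0.2, 200)
predict = bagged_stumps(log_gdp, waste)
```

A production version would use a full random forest (e.g. scikit-learn's RandomForestRegressor) with multiple predictors, deeper trees and out-of-sample validation, but the bootstrap-and-average structure is the same.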
We then use the estimated plastic input into the ocean to force several Lagrangian model runs. These Lagrangian simulations include different parameterizations of plastic beaching; in particular, they vary in the beaching probabilities and the assumed residence time of plastic on beaches. We present the global distribution of beached plastic and the size of the beached-plastic reservoir in these model scenarios.
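Beaching parameterizations of the kind varied above are commonly written as memoryless per-timestep probabilities. The sketch below is an illustrative assumption, not the authors' scheme: the parameter names and the coastal-cell flag are ours, and the abstract compares several such parameterizations without specifying their form:

```python
import numpy as np

def beaching_step(coastal, beached, p_beach, p_resus, rng):
    """One stochastic beaching timestep: a floating particle in a
    coastal cell beaches with probability p_beach; a beached particle
    resuspends with probability p_resus (so the mean residence time on
    the beach is 1/p_resus timesteps)."""
    u = rng.random(coastal.shape)
    newly_beached = coastal & ~beached & (u < p_beach)
    resuspended = beached & (u < p_resus)
    return (beached | newly_beached) & ~resuspended

rng = np.random.default_rng(7)
coastal = np.ones(10000, dtype=bool)       # all particles near the coast
state = np.zeros(10000, dtype=bool)        # none beached initially
for _ in range(50):
    state = beaching_step(coastal, state, p_beach=0.05, p_resus=0.05, rng=rng)
```

For particles that stay coastal, the equilibrium beached fraction of such a scheme is p_beach / (p_beach + p_resus), which is why the beaching probability and the beach residence time are the natural axes along which the scenarios differ.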
How to cite: Laufkoetter, C., Lang, K., Benedetti, F., Onink, V., and Vogt, M.: Marine plastic waste input between 1990-2015 and potential beaching scenarios, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18476, https://doi.org/10.5194/egusphere-egu2020-18476, 2020.
ITS2.9/SSS8.1 – Plastics in terrestrial ecosystems: detection, quantification and description of their effects on soils and plants
EGU2020-20480 | Displays | ITS2.9/SSS8.1
Investigation into the Vertical Migration of Microplastic in Agricultural Soil
Linda Heerey, John O'Sullivan, Michael Bruen, Ian O'Connor, Anne Marie Mahon, Heather Lally, Sinéad Murphy, Róisín Nash, and James O'Connor
The prevalence of microplastics (MPs), typically characterised as polymeric particles of 1 µm to 5 mm, is an increasing concern in our marine and freshwater systems. International research efforts have mainly focused on the abundance, characteristics and implications of plastic pollution in marine settings, with the transport and fate of plastics in terrestrial and freshwater systems being less well understood. The pathway from land to sea is significant in the Irish context given the widespread use of MP-rich biosolids for soil conditioning on agricultural land. Biosolids are the treated sewage sludge produced in the wastewater treatment process, ~80% of which nationally is used in land treatment. Given the combined nature of the drainage network in many parts of Ireland (storm and foul water conveyed and treated together), coupled with evidence that 90% of MPs in influent waters are retained in these sewage sludges, the application of sludges to agricultural land represents a considerable MP input to these land systems. MPs can potentially be transported from these terrestrial systems through atmospheric escape, along hydrological pathways through the soil matrix, and/or in direct overland runoff.
Here we report on an experimental investigation exploring the transport potential of biosolid MPs through infiltration and percolation processes in agricultural fields. A drainage experiment was initially undertaken in loosely packed vertical sand columns. Polymers of different types (PVC, PET and LDPE) and sizes (<150 µm, 150-300 µm), in both virgin and weathered states, were seeded on the surface of saturated sand columns and subjected to simulated rainfall of varying intensity for different durations (up to 20 hours). Each test was conducted in triplicate, with columns draining under gravity and water samples collected from their base. The results indicate limited MP mobility, given that all seeded MPs were recovered in the surface layers (top 5 cm). To confirm these findings, a further investigation involving the extraction of 2 m deep cores from a down-slope transect of an agricultural field was undertaken. This field had been treated with thermally dried wastewater treatment plant sludge annually for ~20 years. The dispersion and depth of MPs were observed through laboratory testing and Itrax core scanning. Results indicated that the majority of MPs (mostly fibres) were retained in the upper c. 30 cm (plough zone) of each core, with penetration of biosolid MPs to greater depths being considerably more limited. Concentrations of MPs found within the plough zone were lower than expected (0.14 to 0.03 MP per gram of soil), suggesting that vertical migration of biosolid MPs through the soil matrix is not a significant hydrological transport pathway.
How to cite: Heerey, L., O'Sullivan, J., Bruen, M., O'Connor, I., Mahon, A. M., Lally, H., Murphy, S., Nash, R., and O'Connor, J.: Investigation into the Vertical Migration of Microplastic in Agricultural Soil, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20480, https://doi.org/10.5194/egusphere-egu2020-20480, 2020.
EGU2020-10362 | Displays | ITS2.9/SSS8.1
Microplastic enhances water repellency of soils
Andreas Cramer, Ursula Bundschuh, Pascal Bernard, Mohsen Zarebanadkouki, and Andrea Carminati
Soils are the largest sink of microplastic particles (MPP) in terrestrial ecosystems. However, there is little knowledge of the implications of MPP contamination in soils. In particular, we do not know how MPP move nor how they affect soil hydraulic properties and soil moisture dynamics.
Among the expected effects of MPP on soil hydraulic properties is the likelihood that MPP enhance soil water repellency. This emerges from (1) the surface chemical properties of MPP and (2) their surface physical properties such as size and shape. Here, we tested mixtures of MPP and a model porous medium. The sessile drop method was applied and apparent contact angles were measured. We show that contact angles increase with rising MPP concentration. Even at relatively low MPP concentrations, the contact angles increase steeply and rapidly approach super-hydrophobicity. Furthermore, we provide a physical explanation of the apparent contact angles resulting from the three-phase contact line between the solid composite surface, water and air. The considered modes of a droplet resting on a surface are the Wenzel, Cassie-Baxter and Young states. The goal was to differentiate between the surfaces contributing to the apparent contact angle and to pin down the impact of MPP in these systems.
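The Wenzel and Cassie-Baxter states mentioned above have simple closed forms (cos θ* = r cos θ and cos θ* = f_s(cos θ + 1) − 1, respectively). A minimal sketch, with illustrative parameter values rather than the authors' measurements:

```python
import math

# Classical wetting models for the apparent contact angle θ* on a
# structured surface. Parameter values below are illustrative only.

def wenzel(theta_young_deg, roughness):
    """Rough, fully wetted surface: cos θ* = r cos θ (r >= 1)."""
    c = roughness * math.cos(math.radians(theta_young_deg))
    c = max(-1.0, min(1.0, c))  # clamp to the physical range
    return math.degrees(math.acos(c))

def cassie_baxter(theta_young_deg, solid_fraction):
    """Composite solid-air interface: cos θ* = f_s (cos θ + 1) - 1."""
    c = solid_fraction * (math.cos(math.radians(theta_young_deg)) + 1.0) - 1.0
    c = max(-1.0, min(1.0, c))
    return math.degrees(math.acos(c))

# A mildly hydrophobic polymer (Young angle ~95 deg) on a surface that
# traps air pockets (solid fraction 0.2) appears far more repellent:
print(round(cassie_baxter(95.0, 0.2), 1))
```

Roughness amplifies whatever wetting tendency the flat material has (Wenzel), while trapped air (Cassie-Baxter) pushes angles towards super-hydrophobicity, which is consistent with the steep rise in apparent contact angle reported above.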
Considering the implications of these results, increased water repellency alters soil hydraulic properties towards lower water contents, resulting in a shift of the water retention curve. Less water in soil, especially at sites of high MPP concentration, limits the degradation of MPP by hydrolysis. Additionally, microorganisms and their enzymes cannot migrate through the liquid phase towards the MPP, further prolonging the process of natural purification.
How to cite: Cramer, A., Bundschuh, U., Bernard, P., Zarebanadkouki, M., and Carminati, A.: Microplastic enhances water repellency of soils, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10362, https://doi.org/10.5194/egusphere-egu2020-10362, 2020.
EGU2020-2548 | Displays | ITS2.9/SSS8.1
Transport and retention of nanoplastic particles in saturated columns packed with iron oxyhydroxide-coated sand
Taotao Lu, Benjamin S. Gilfedder, and Sven Frei
With the increasing use of nanoplastic products in our daily life, these particles will invariably enter the subsurface environment. It is, therefore, vital to understand the transport and retention of nanoplastic particles in groundwater systems. Surface charge heterogeneity is one of the basic physico-chemical characteristics of aquifer materials, but little research has been conducted on this topic. This study aimed to understand how the interactions between the porous media, solution chemistry, and particle surface charge influence the transport and retention of polystyrene nanoplastic particles (PS-NPs) in the subsurface. PS-NPs at 25 mg/L were injected into columns packed with iron oxyhydroxide-coated sand. In addition, factors such as the content of iron oxyhydroxide-coated sand (λ), pH, ionic strength (IS), and cation valence were systematically studied. DLVO theory was used to evaluate the interactions between PS-NPs and the porous media. By comparing the breakthrough curves (BTCs) of PS-NPs, it was clear that all these variables exerted a significant influence on the mobility of PS-NPs in the columns. These effects could be explained as follows. Firstly, applying DLVO theory made it possible to model the electrostatic interaction between quartz sand and PS-NPs. For instance, at different IS (NaCl), the maximum energy barrier (Φmax) decreased with increasing IS, meaning that PS-NPs could more easily overcome the energy barrier and deposit on the sand surface at higher IS. Secondly, the positively charged iron oxyhydroxide coating provided additional favorable deposition sites for the negatively charged PS-NPs. However, when the pH of the solution exceeded the pHpzc of iron oxyhydroxide (~pH 9), the iron coating became negatively charged and the mobility of PS-NPs increased.
Finally, bridging cations such as Ca2+ and Ba2+ resulted in significant deposition of PS-NPs on the sand, due to a bridging effect connecting the porous media and PS-NPs through the O-containing functional groups on both the plastic and mineral surfaces. This study provides a better understanding of how charge heterogeneity on aquifer materials and groundwater hydrochemistry affect the transport of PS-NPs in aquifers.
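The Φmax trend described above can be reproduced with a textbook sphere-plate DLVO calculation (a Hogg-Healy-Fuerstenau electrostatic term plus an unretarded van der Waals term). The particle radius, surface potentials and Hamaker constant below are illustrative assumptions, not fitted values from this study.

```python
import math

# Sphere-plate DLVO sketch: why the energy barrier shrinks as ionic
# strength rises. All material parameters are illustrative assumptions.

kB, T = 1.381e-23, 298.0          # Boltzmann constant (J/K), temperature (K)
e, NA = 1.602e-19, 6.022e23       # elementary charge (C), Avogadro's number
eps = 78.5 * 8.854e-12            # permittivity of water, F/m

def kappa(ionic_strength_M):
    """Inverse Debye length (1/m) for a 1:1 electrolyte."""
    I = ionic_strength_M * 1000.0  # mol/L -> mol/m3
    return math.sqrt(2 * NA * e**2 * I / (eps * kB * T))

def dlvo_energy(h, a=50e-9, psi1=-0.03, psi2=-0.03, A=1e-20, I=0.01):
    """Total interaction energy (J) at separation h (m): HHF electrostatic
    double-layer term plus unretarded van der Waals attraction."""
    k = kappa(I)
    edl = math.pi * eps * a * (psi1**2 + psi2**2) * (
        (2 * psi1 * psi2 / (psi1**2 + psi2**2)) *
        math.log((1 + math.exp(-k * h)) / (1 - math.exp(-k * h))) +
        math.log(1 - math.exp(-2 * k * h)))
    vdw = -A * a / (6 * h)
    return edl + vdw

def barrier_kT(I):
    """Height of the repulsive maximum, in units of kB*T."""
    hs = [i * 1e-10 for i in range(1, 1000)]   # separations 0.1-100 nm
    return max(dlvo_energy(h, I=I) for h in hs) / (kB * T)

# The barrier drops monotonically as ionic strength increases:
print(barrier_kT(0.001) > barrier_kT(0.01) > barrier_kT(0.1))
```

At low ionic strength the diffuse double layer is thick and the repulsive maximum reaches tens of kT; compressing it with salt lets particles reach the attractive primary minimum, matching the enhanced deposition at high IS reported above.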
How to cite: Lu, T., Gilfedder, B. S., and Frei, S.: Transport and retention of nanoplastic particles in saturated columns packed with iron oxyhydroxide-coated sand, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2548, https://doi.org/10.5194/egusphere-egu2020-2548, 2020.
EGU2020-11584 | Displays | ITS2.9/SSS8.1
Microplastic-Induced Changes in Soil Quality and Functioning: Field Scale Trials
Davey Jones and David Chadwick
Microplastics represent an emerging threat to terrestrial ecosystems; however, our understanding of the fate and behaviour of microplastics in the plant-soil system remains poor. In this replicated, field-scale study we added microplastics (low-density polyethylene) to soil at dose rates representing contamination levels ranging from 0 to 10 t ha-1. These levels were chosen to cover both agricultural and urban contamination levels. Over a 12-month period, we studied a range of chemical, physical and biological soil quality indicators and wheat productivity to evaluate the impact of microplastics on the delivery of soil-related ecosystem services. Overall, we found little evidence that microplastics affect plant growth, even at high dose rates. In contrast, microplastics had a significant impact on soil quality. PLFA profiling and 16S metabarcoding of the bacterial and archaeal communities revealed changes in key microbial taxa at high microplastic doses. In addition, physiological profiling of the microbial community using lipidomics, untargeted metabolomics and targeted nitrogen metabolomics (on a GC-MS platform) revealed significant shifts in microbial physiology. No appreciable effect of microplastics was seen on soil N and P dynamics, earthworm abundance or greenhouse gas emissions (CO2, N2O and CH4). Overall, our results suggest that microplastics do induce changes in soil quality, but that this has little overall effect on the delivery of key soil-related ecosystem services. These results contrast strongly with experiments performed in laboratory mesocosms, where microplastics negatively affected plant growth and soil quality, and highlight the need to study the impact of microplastics at the field scale over longer timescales.
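Field doses in t ha-1 can be translated into approximate soil mass fractions, which makes them comparable with laboratory studies that dose in % w/w. The mixing depth and bulk density in the sketch below are illustrative assumptions, not values reported by the study.

```python
# Hedged conversion of a field application rate (t/ha) to an approximate
# soil mass fraction. Mixing depth and bulk density are assumed values.

def dose_to_w_w(rate_t_ha, mix_depth_m=0.1, bulk_density_kg_m3=1300):
    plastic_kg = rate_t_ha * 1000.0                       # plastic mass per hectare
    soil_kg = 10_000 * mix_depth_m * bulk_density_kg_m3   # soil mass per hectare
    return 100.0 * plastic_kg / soil_kg                   # % w/w

# Under these assumptions, the top field dose of 10 t/ha corresponds to:
print(f"{dose_to_w_w(10):.2f} % w/w")
```

This is well below the multi-percent w/w loadings used in many mesocosm studies, which may partly explain why the field trial saw weaker effects than laboratory experiments.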
How to cite: Jones, D. and Chadwick, D.: Microplastic-Induced Changes in Soil Quality and Functioning: Field Scale Trials, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11584, https://doi.org/10.5194/egusphere-egu2020-11584, 2020.
EGU2020-18474 | Displays | ITS2.9/SSS8.1
What do we know about how the terrestrial multicellular soil fauna reacts to microplastic?
Frederick Büks, Loes van Schaik, and Martin Kaupenjohann
The ubiquitous accumulation of microplastic particles across all global ecosystems is accompanied by their uptake into soil food webs. In this work, we evaluated studies on passive translocation, active ingestion, bioaccumulation and adverse effects across the phylogenetic tree of multicellular soil fauna. The representativeness of these studies for natural soil ecosystems was assessed using data on plastic type, shape, composition, concentration and time of exposure.
Available studies cover a wide range of soil organisms, with emphasis on earthworms, nematodes, springtails, beetles and lugworms, each focused on well-known model organisms. Most of the studies applied microplastic concentrations similar to those found in slightly to very heavily polluted soils. In many cases, however, polystyrene microspheres were used, a combination of plastic type and shape that is easily available but does not represent the main plastic input into soil ecosystems. In turn, microplastic fibres are strongly underrepresented compared to their high abundance in contaminated soils. Further properties of the plastic, such as aging, coating and additives, were insufficiently documented. Despite these limitations, there is a recurring pattern of active intake followed by a population shift within the gut microbiome and adverse effects on motility, growth, metabolism, reproduction and mortality in various combinations, especially at high concentrations and small particle sizes.
For future experiments, we recommend a modus operandi that takes into account the type, shape, degree of aging and realistic concentrations of microplastic fractions in natural and contaminated soils, as well as long-term incubation in soil mesocosms.
How to cite: Büks, F., van Schaik, L., and Kaupenjohann, M.: What do we know about how the terrestrial multicellular soil fauna reacts to microplastic?, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18474, https://doi.org/10.5194/egusphere-egu2020-18474, 2020.
EGU2020-13733 | Displays | ITS2.9/SSS8.1
Microplastics in agroecosystem – effects of plastic mulch film residues on soil-plant system
Yueling Qi, Xiaomei Yang, Esperanza Huerta Lwanga, Paolina Garbeva, and Violette Geissen
In recent decades, the use of plastic mulch film in (semi-)arid agricultural regions has increased strongly. Plastic residues from mulching remain and accumulate in the soil, which can lead to serious environmental problems. Biodegradable plastic mulch films were developed as an environmentally friendly alternative to address plastic pollution on agricultural land. However, the effects of polyethylene and biodegradable mulch film residues on the soil-plant system are largely unknown.
In this PhD project, we performed a series of experiments to assess the effects of low-density polyethylene (LDPE) and biodegradable plastic (Bio; made of polyethylene terephthalate, polybutylene terephthalate and pullulan) at macro- (5 mm2, Ma) and micro- (50 µm-1 mm, Mi) sizes on wheat growth, the rhizosphere microbiome, soil physicochemical and hydrological properties, and soil suppressiveness. The results showed that plastic residues had negative effects on both the above- and below-ground parts of wheat during both vegetative and reproductive development. We also identified significant effects of Bio and LDPE plastic residues on the rhizosphere bacterial communities and on the blend of volatiles emitted in the rhizosphere. Tested with a gradient of plastic residue concentrations (0, 0.5%, 1% and 2% w/w), soil physicochemical and hydrological properties responded non-monotonically to the residual amount of plastic debris in the soil. Lastly, although we did not observe effects of plastic residues on disease infection in our experiment, we anticipate that soil suppressiveness and other soil functions would be affected by the presence of plastics in soil.
Overall, our study provides evidence for impacts of plastic residues on the soil-plant system, suggesting an urgent need for more research examining their environmental impacts on agroecosystems.
How to cite: Qi, Y., Yang, X., Huerta Lwanga, E., Garbeva, P., and Geissen, V.: Microplastics in agroecosystem – effects of plastic mulch film residues on soil-plant system, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13733, https://doi.org/10.5194/egusphere-egu2020-13733, 2020.
EGU2020-5580 | Displays | ITS2.9/SSS8.1
Effect of plastic mulching on the accumulation and distribution of macro and micro plastic particles in the soil - A case study of two farming systems in North West China
Fanrong Meng
Plastic mulching is a common farming practice in arid and semi-arid regions. Inappropriate disposal of plastic films can lead to the contamination of soil with macroplastics (MaPs) and microplastics (MiPs). To study the effects of plastic mulching on soil contamination with MaPs and MiPs, and the role of farm management in this contamination, research was conducted on two farming systems in Northwest China, where plastic mulching is used intensively. Farming in Wutong Village (S1) is characterized by small plots and low-intensity machinery tillage, while farming in Shihezi (S2) is characterized by large plots and high-intensity machinery tillage. Soils were sampled to a depth of 30 cm and analysed. The results showed that MaPs ranged from 30.3 kg·ha-1 to 82.3 kg·ha-1 in S1 and from 43.5 kg·ha-1 to 148 kg·ha-1 in S2. The main macroplastic size categories were 2-10 cm2 and 10-50 cm2 in S1 and < 2 cm2 and 2-10 cm2 in S2. In S1, we found that 6-8 years of continuous mulching resulted in the accumulation of more MaPs than intermittent mulching over a span of 30 years. In S2, 6 to 15 years of plastic mulching led to MaPs accumulation in fields, but from 15 to 18 years the MaPs number and content in soils dropped due to further fragmentation of the plastic and its dispersal into the environment. MiPs were mainly detected in fields with > 30 years of mulching in S1 and were found in all fields in S2; this indicates that long-term cultivation and high-intensity machinery tillage can lead to more severe microplastic pollution. These results emphasize the impact of farm management on the accumulation and spread of MaPs and MiPs in the soil, and regulations are needed to prevent further contamination of the soil.
How to cite: Meng, F.: Effect of plastic mulching on the accumulation and distribution of macro and micro plastic particles in the soil - A case study of two farming systems in North West China , EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5580, https://doi.org/10.5194/egusphere-egu2020-5580, 2020.
EGU2020-16710 | Displays | ITS2.9/SSS8.1
Fate of microplastic particles in agricultural soil systems: Transport and accumulation processes in contrasting environments
Rachel Hurley, Jill Crossman, Theresa Schell, Andreu Rico, Martyn Futter, Marco Vighi, and Luca Nizzetto
There is a paucity of data regarding the sources and fate of microplastics in agricultural settings, despite indications that these environments may receive significant inputs of microplastics from a range of sources. Several studies have documented the enrichment of sewage sludge with microplastic particles as a result of wastewater treatment processes. In many countries, sludge is applied to agricultural soils as a soil conditioner. Based on the extent of application and the microplastic loads in sludge material, sludge application to land is expected to represent a considerable release pathway for microplastic particles to the environment. The fate of these particles across spatial and temporal scales is, however, unknown. This includes the potential for the propagation of contamination to connected aquatic systems and beyond.
The Water JPI-funded IMPASSE project addresses significant gaps in our understanding of microplastic contamination in agricultural systems. As part of this project, two case study locations in contrasting environments were selected for study: the semi-arid Henares catchment in central Spain and the humid continental Beaver and Orillia catchments in the Lake Simcoe watershed in Ontario, Canada. Agricultural fields subjected to different sludge application treatments (timing and origin of material) were assessed for microplastic contamination through repeat soil core sampling. This was coupled with runoff experiments using modified Pinson collectors to track the mobilisation of sewage sludge-derived particles from soils. Laboratory analysis was performed according to Hurley et al. (2018). Thorough characterisation of all microplastic particles down to a lower size limit of 50 µm was achieved, including particle size, morphology, polymer type, and estimated mass. Microplastic loads in soils increased following sludge application. The dynamics of contamination from soil core analyses show complex spatio-temporal patterns of accumulation and vertical and lateral transport of particles. Through the use of experimental runoff plots, the mobilisation of microplastic particles from agricultural soils has been documented for the first time. Preferential accumulation and transport of different particle morphologies – e.g. fibres vs fragments – was also observed. These findings form the basis of innovative modelling work in the case study catchments to predict the dynamics of agricultural microplastic contamination and subsequent transfer to aquatic environments.
How to cite: Hurley, R., Crossman, J., Schell, T., Rico, A., Futter, M., Vighi, M., and Nizzetto, L.: Fate of microplastic particles in agricultural soil systems: Transport and accumulation processes in contrasting environments, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-16710, https://doi.org/10.5194/egusphere-egu2020-16710, 2020.
EGU2020-20054 | Displays | ITS2.9/SSS8.1
Estimating regional distributions of agricultural microplastic immissions into soils - a top-down modeling approach for Germany
Elke Brandes, Martin Henseler, and Peter Kreins
The topic of microplastic (MP) contamination in agricultural soils has recently gained attention in science and society. Experimental studies indicate that microplastic (i.e., plastic particles < 5 mm in size) can have negative effects on soil physical properties and ecology, but an actual impairment of soil functions at current concentration levels in agricultural soils has yet to be shown. Nevertheless, the continuous production of single-use plastic and low degradation rates imply an accumulation of MP in the environment that calls for more research on the amounts and impacts of this contaminant. The most discussed agricultural sources of microplastic contamination of cropland are biosolids (e.g., sewage sludge and compost) applied as soil amendments to fields, as well as tarps used in plasticulture. However, knowledge about how much microplastic is accumulating in agricultural soils is scarce. Only a few analytic quantification studies have been published so far. Existing estimates from production and consumption statistics have been made at the national level, but as of yet, spatially explicit regional quantifications of microplastic immissions into agricultural soils are missing from the scientific literature.
Using data on microplastic concentrations in biosolids from the literature in combination with national and regional statistics on sewage sludge, compost and organic waste production, as well as specialty crop areas, we estimated annual microplastic immissions into agricultural soils in Germany at NUTS3 (county) resolution. This top-down modeling approach allowed us to identify hot spots where potential microplastic concentration is high.
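At its core, the top-down estimate described above is a unit conversion from sludge application statistics and MP concentrations to an annual load per region. A minimal sketch of that step, with purely hypothetical numbers (the function name and values below are illustrative and not taken from the study):

```python
def mp_immission_kg(sludge_t_per_yr: float, mp_mg_per_kg: float) -> float:
    """Annual microplastic input [kg/yr] to soils of one region via sludge.

    sludge_t_per_yr -- dry-matter sewage sludge applied to fields [t/yr]
    mp_mg_per_kg    -- MP concentration in the sludge [mg/kg dry matter]
    """
    # t -> kg (x 1000), then mg -> kg (/ 1e6)
    return sludge_t_per_yr * 1000 * mp_mg_per_kg / 1e6

# hypothetical county: 2000 t sludge/yr at 1500 mg MP/kg dry matter
print(mp_immission_kg(2000, 1500))  # 3000.0 kg MP per year
```

Summing such terms per source (sludge, compost, plastic mulch) and county, and normalising by agricultural area, would yield the kind of NUTS3-level hot-spot map the abstract describes.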
Although these estimates are based on limited data availability, they yield information on the spatial distribution of potential microplastic contamination in agricultural soils in Germany. Our results provide first indications about locations where detailed soil analysis could be useful to investigate in situ processes and impacts. The methodology can be applied to other regions and continuously adapted when more knowledge on relevant sources, transport, accumulation, and degradation rates of microplastic in soils is gained in the future.
How to cite: Brandes, E., Henseler, M., and Kreins, P.: Estimating regional distributions of agricultural microplastic immissions into soils - a top-down modeling approach for Germany, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20054, https://doi.org/10.5194/egusphere-egu2020-20054, 2020.
EGU2020-1954 | Displays | ITS2.9/SSS8.1
Soil erosion as pathway of microplastic transport from agricultural soils to inland waters
Raphael Pinheiro Machado Rehm and Peter Fiener
Agricultural soils play a key role as a sink for microplastic (MP) from different sources, especially via the application of sewage sludge and compost, the decay of plastic mulch, and tire wear particles along roads. However, the effectiveness of this sink might be substantially reduced in areas subject to water erosion. The aim of this study is to determine the transport potential of MP during water erosion events on agricultural land. More specifically, we are interested in whether MP is preferentially transported or whether it is attached or associated to soil minerals and aggregates, leading to a more conservative transport behavior. The transport behavior is studied based on a series of plot-scale rainfall simulations on a silty loam (16% sand, 59% silt, 25% clay; 1.3% OC) and a loamy sand soil (72% sand, 18% silt, 10% clay; 0.9% OC) located at experimental farms in Southern Germany. To simulate heavy rain on dry and wet soil, a sequence of two 30-min simulations (rainfall intensity 60 mm/h) separated by a 30-min gap was performed on each of the four plots (2 m x 5 m). The simulations are repeated in spring and autumn for two years. Before the beginning of the experiment, all plots were prepared by adding fine (53-100 μm) and coarse (250-300 μm) microplastic (high-density polyethylene) to the topsoil (< 10 cm) at concentrations of 10 g m-2 and 50 g m-2. The different soils show similar mean runoff rates for the dry run (2 l min-1), whereas the wet run produced slightly higher rates on the silty loam (5.5 l min-1) compared to the loamy sand soil (4 l min-1). In contrast, MP erosion and transport on the loamy sand was more selective, leading to MP enrichment by a factor of 3 to 20 in the first set of experiments, compared to an enrichment factor of 0.4 to 0.8 on the silty loam.
The results from the first set of rainfall simulations clearly underline the selective nature of MP erosion and transport, leading to a disproportionate loss of MP from eroding sites into inland waters. The degree of MP enrichment in surface runoff depends heavily on soil texture and especially on the moisture status at the beginning of an erosive rainfall event. Further investigations of longer-term MP enrichment effects, depending on the association of MP with soil minerals and aggregates, are under preparation.
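The enrichment factors quoted above are the ratio of MP concentration in the eroded sediment to that in the source topsoil; values above 1 indicate selective MP transport, values below 1 indicate depletion. A sketch with hypothetical concentrations (not the study's data):

```python
def enrichment_factor(c_sediment: float, c_soil: float) -> float:
    """Ratio of MP concentration in eroded sediment to that in the
    source topsoil (same units for both, e.g. mg MP per kg solids)."""
    return c_sediment / c_soil

# hypothetical values: selective transport vs. depletion
print(enrichment_factor(150.0, 10.0))  # 15.0 -> MP preferentially eroded
print(enrichment_factor(6.0, 10.0))    # 0.6  -> MP retained in the soil
```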
How to cite: Pinheiro Machado Rehm, R. and Fiener, P.: Soil erosion as pathway of microplastic transport from agricultural soils to inland waters, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1954, https://doi.org/10.5194/egusphere-egu2020-1954, 2020.
EGU2020-9637 | Displays | ITS2.9/SSS8.1
Effects of microplastic particles on vertical water flow in soil columns
Hannes Laermanns, Katharina Luise Müller, Martin Löder, Ramona Ehl, Julia Möller, and Christina Bogner
Since the introduction of synthetic polymers into the global material cycle, increasing amounts of microplastics have been deposited in soils. In contrast to their impact on marine environments, little is known about the influence of these long-term contaminants on terrestrial ecosystems in general and on physical and chemical soil properties in particular. First studies highlight that microplastic particles might attach to and clog pores smaller than 30 µm, which are crucial for the hydraulic conductivity and therefore the water flow of soils (Zhang et al., 2019, doi:10.1016/j.scitotenv.2019.03.149).
In our study, we analyse the effects of microplastic particles on vertical water flow in soil columns. In infiltration-drainage experiments, we contrast water flow in soil columns with and without microplastic particles. A bromide tracer is used to compare the arrival times of the wetting fronts and the tracer fronts, and water flow is characterized using the viscous flow approach (e.g. Bogner & Germann, 2019, doi:10.2136/vzj2018.09.0168). We show first results on how microplastic particles may affect the vertical water flow in soils and the breakthrough of the tracer.
How to cite: Laermanns, H., Müller, K. L., Löder, M., Ehl, R., Möller, J., and Bogner, C.: Effects of microplastic particles on vertical water flow in soil columns, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9637, https://doi.org/10.5194/egusphere-egu2020-9637, 2020.
EGU2020-19717 | Displays | ITS2.9/SSS8.1
Transport of Nanoplastic under groundwater aquifer flow and transport conditions
Sascha Müller, Tonci Balic-Zunic, and Nicole R. Posth
In terrestrial environments, soils are hypothesized sinks for plastic particles. Nonetheless, due to the existence of preferential flow paths as well as a variety of geochemical and microbiological processes, this sink may only be temporary. A vertical translocation from soils to groundwater aquifers eventually occurs along different pathways. Under these conditions, nanoplastic transport characteristics resemble colloidal transport behavior. The magnitude of plastic transport is thereby governed by a complex interplay between the particle and its surrounding media (particle-particle, particle-solvent, particle-porous media), modulated by different hydro-geochemical and microbiological conditions. The physical entrapment of particles (straining) may be significant when the particle diameter exceeds 5% of the median grain size diameter. Below that size, additional electrostatic, van der Waals, or steric interactions become increasingly important.
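The straining criterion stated above (particle diameter exceeding ~5% of the median grain diameter d50) can be written as a one-line check; the particle and grain sizes below are hypothetical examples:

```python
def straining_significant(particle_d_um: float, d50_um: float,
                          threshold: float = 0.05) -> bool:
    """True when physical straining is expected to matter, i.e. when the
    particle diameter exceeds ~5% of the median grain diameter d50."""
    return particle_d_um > threshold * d50_um

# 100 nm polystyrene (0.1 um) in a sand with d50 = 300 um:
print(straining_significant(0.1, 300))  # False -> surface forces dominate
print(straining_significant(20, 300))   # True  -> straining significant
```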
We present a preliminary dataset on the interaction between nano-sized polystyrene (PS) with different surface coatings and a variety of common minerals occurring in groundwater aquifers in the presence of natural organic matter (NOM). The reference aquifer material is based on the Danish subsurface structure of Quaternary and Miocene aquifers, e.g. quartz, calcite and pyrite, among others. In our study, batch-scale interactions are up-scaled in column flow and transport experiments, simulating different groundwater aquifer flow conditions in the presence of selected minerals and NOM.
This work aims to clarify the transport behavior of plastic pollutants in the subsurface environment. Furthermore, it serves as a guide for qualitatively assessing and quantifying the vulnerability of groundwater aquifers to nanoplastic pollution.
How to cite: Müller, S., Balic-Zunic, T., and Posth, N. R.: Transport of Nanoplastic under groundwater aquifer flow and transport conditions, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19717, https://doi.org/10.5194/egusphere-egu2020-19717, 2020.
EGU2020-21826 | Displays | ITS2.9/SSS8.1
Plastic mulch debris in agriculture: accumulation and interactions with pesticides and soil microbiota
Nicolas Beriot, Raul Zornoza, Paul Zomer, Onurcan Ozbolat, Eva Lloret, Isabel Miralles, Raúl Ortega, Esperanza Huerta Lwanga, and Violette Geissen
Plastic mulch is widely used in agriculture to decrease water evaporation, increase the soil temperature, or prevent weeds. Most plastic mulches are made of highly resistant low-density polyethylene (LDPE). The incomplete removal of polyethylene mulch after use causes plastic pollution. Pro-oxidant additive containing (PAC) and “biodegradable” (Bio) plastics were developed to avoid the need for plastic removal while preventing the accumulation of plastic debris. In conventional agriculture, the use of pesticides releases substances which can be sorbed to soil particles and plastic debris. Pesticides and their residues may affect the soil microbial community. Some microbial groups are capable of using an applied pesticide as a source of energy and nutrients to multiply, whereas the pesticide may be toxic to other organisms. Little is known about the long-term effects of plastic debris accumulation in relation to pesticide residues. We studied 36 parcels on commercial farms, either organic or conventional, where plastic mulch has been used for 5 to 20 years in the countryside of Cartagena (SE Spain). We compared the macro- and microplastic debris contents, pesticide residue levels, and soil physicochemical properties in the soil surface among all parcels. Eighteen insecticides, 17 fungicides, and 6 herbicides were analysed with LC-MS/MS and GC-MS/MS systems. The ribosomal 16S and ITS DNA variable regions were sequenced to study shifts in bacterial and fungal communities, respectively. We found accumulation of plastic debris in all soil samples, with plastic contents being higher in soils from organic farms. The average plastic concentration across both managements was 0.20±0.26 g/kg of plastic debris. Soils under conventional management contained on average more than 6 different pesticide residues and an overall pesticide concentration of 0.20±0.18 mg/kg.
The interactions between plastic debris concentration and pesticide concentration will be presented, together with the effects of plastic and pesticides in soil on soil microbial communities, identifying the most sensitive groups, which can act as bioindicators for plastic and pesticide pollution in soil.
How to cite: Beriot, N., Zornoza, R., Zomer, P., Ozbolat, O., Lloret, E., Miralles, I., Raúl Ortega, R., Huerta Lwanga, E., and Geissen, V.: Plastic mulch debris in agriculture: accumulation and interactions with pesticides and soil microbiota, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21826, https://doi.org/10.5194/egusphere-egu2020-21826, 2020.
EGU2020-22616 | Displays | ITS2.9/SSS8.1
Field studies for detecting microplastic in environmental compartments and a novel tomography approach for analysis of undisturbed soil or sediment cores
Sascha Oswald, Lena Katharina Schmidt, Eva Bauer, and Christian Tötzke
In recent years, we have had to realize that plastic has not only been accumulating in the oceans but, as microplastics, has also entered surface waters, soils, and in part organisms in large numbers. Thus, as with other pollutants in the environment in the past, we need detection and monitoring methods to quantify their distribution, fate, and pathways. With these, we can better understand where they are emitted, where they are present, and what key mechanisms they undergo. However, this poses a new challenge and a need for novel approaches, because microplastics differ from other pollutants. In one study, we monitored the presence of microplastic particles and some of their properties in a surface water course and in groundwater wells close to the river banks, detecting them by a novel and fast imaging technique after processing of surface water samples. Furthermore, soil and sand samples from different places were separated by density and then manually analyzed, and the results indicated an extensive presence of microplastic particles. Finally, we have developed a tomography approach to detect microplastic particles in undisturbed sandy soil or sediment samples. This has the advantage that cores can be taken and analyzed that show the real distribution of microplastic particles, and also provide some information on their size and shape. Overall, this can also contribute to understanding their deposition and displacement in the past. We will demonstrate how a combination of X-ray and neutron tomography could be used to identify microplastic particles non-invasively, for test samples as well as first environmental samples.
How to cite: Oswald, S., Schmidt, L. K., Bauer, E., and Tötzke, C.: Field studies for detecting microplastic in environmental compartments and a novel tomography approach for analysis of undisturbed soil or sediment cores, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22616, https://doi.org/10.5194/egusphere-egu2020-22616, 2020.
EGU2020-3612 | Displays | ITS2.9/SSS8.1
Detection and quantification of microplastic in soils using a 3D Laser Scanning Confocal Microscope
Tabea Zeyer and Peter Fiener
There is growing concern regarding the pollution of our environment with plastic materials, while especially the dimension of microplastic pollution and its ecological effects is widely discussed. Most studies focus on aquatic environments, while studies in terrestrial systems (mainly soils) are rare. This partly results from the challenges arising when microplastic particles need to be separated from organic and mineral particles. Key analytic techniques for microplastic detection in aquatic and terrestrial systems include Fourier-transform infrared (FT-IR) and micro-Raman spectroscopy, as well as thermal extraction desorption-gas chromatography-mass spectrometry (TED-GC-MS) and pyrolysis-gas chromatography-mass spectrometry (pyr-GC-MS). While the mass spectrometric methods cannot determine particle sizes, FT-IR and micro-Raman spectroscopy are very costly and time-consuming. Moreover, the latter detection methods are very sensitive to organic matter particles, which are difficult to remove fully during soil sample preparation. Hence, a faster and more robust method to determine microplastic in soils is essential for a wider analysis of this environmental problem. In this study, we combine a density separation scheme with 3D Laser Scanning Confocal Microscope (Keyence VK-X1000, Japan) analysis to determine the number and size of microplastic particles in soil samples. For the analysis, a silty loam (16% sand, 59% silt, 25% clay, 1.3% organic carbon) and a loamy sand (72% sand, 18% silt, 10% clay, 0.9% organic carbon) were spiked with different concentrations of high-density polyethylene (HDPE), low-density polyethylene (LDPE) and polystyrene (PS) microplastic (HDPE 50-100 and 250-300 µm, LDPE <50 and 200-800 µm, PS <100 µm).
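Density separation works because common polymers are less dense than the separation solution, while most soil minerals (around 2.65 g/cm3) are not. A sketch of that selection logic, using typical literature density values assumed for illustration rather than measured ones:

```python
# illustrative polymer densities in g/cm^3 (typical literature values, assumed)
POLYMER_DENSITY = {"HDPE": 0.95, "LDPE": 0.92, "PS": 1.05, "PET": 1.38}

def floats_off(polymer: str, solution_density: float) -> bool:
    """True if the polymer is less dense than the separation solution
    and therefore reports to the floating fraction."""
    return POLYMER_DENSITY[polymer] < solution_density

# a saturated NaCl solution (~1.2 g/cm^3) lifts PE and PS but not PET
print([p for p in POLYMER_DENSITY if floats_off(p, 1.2)])
```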
3D Laser Scanning Confocal Microscopy shows very promising results, using differences in optical characteristics, and especially in surface roughness, to distinguish plastic from the mineral and organic particles left after density separation. Overall, 3D Laser Scanning Confocal Microscopy is a promising tool for the relatively fast detection and quantification of microplastic in soils, and could well complement the similarly fast mass spectrometric methods used to determine plastic types. However, further research based on 3D Laser Scanning Confocal Microscopy analysis is needed to arrive at an operational and automated analysis process.
How to cite: Zeyer, T. and Fiener, P.: Detection and quantification of microplastic in soils using a 3D Laser Scanning Confocal Microscope, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3612, https://doi.org/10.5194/egusphere-egu2020-3612, 2020.
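The density separation step described above rests on a simple float/sink criterion: a particle is recovered in the floating fraction when its density is below that of the separation liquid. A minimal sketch of that criterion follows; the polymer densities are typical literature values and the separation-liquid densities are assumed for illustration, none of them measurements from this study.

```python
# Illustrative sketch of the float/sink criterion behind density
# separation. Polymer densities are typical literature values, not
# measurements from this study.

POLYMER_DENSITY = {  # g/cm^3
    "HDPE": 0.95,
    "LDPE": 0.92,
    "PS": 1.05,
}

def floats(polymer: str, solution_density: float) -> bool:
    """A particle reports to the floating fraction when its density
    is below that of the separation solution."""
    return POLYMER_DENSITY[polymer] < solution_density

# A dense salt solution (assumed ~1.6 g/cm^3) recovers all three
# polymers, whereas plain water (1.0 g/cm^3) would miss PS.
print([p for p in POLYMER_DENSITY if floats(p, 1.6)])
print([p for p in POLYMER_DENSITY if floats(p, 1.0)])
```

This is why the choice of separation liquid matters: denser polymers such as PS sink in water and are lost unless a denser solution is used.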
EGU2020-4614 | Displays | ITS2.9/SSS8.1
Interactions between agricultural mulching plastic debris and pesticidesNicolas Beriot, Paul Zomer, Raul Zornoza, and Violette Geissen
In semi-arid regions, the use of plastic mulch and pesticides in conventional agriculture is nearly ubiquitous. The use of both plastics and pesticides leads to the release of residues in soils. The degradation of plastic and pesticide residues in soil has been studied before, but not together, despite the fact that pesticides may sorb to plastics and that this sorption may change the degradation rate. The sorption of pesticides on low-density polyethylene (LDPE) has been studied previously, but no data are available for other plastics such as pro-oxidant additive containing (PAC) plastics or “biodegradable” (Bio) plastics. The aim of this research was to measure the sorption pattern of active substances from 38 pesticides on LDPE, PAC and Bio plastic mulches and to compare the decay of the active substances in the presence and absence of plastic debris. For this purpose, 38 active substances from 17 insecticides, 15 fungicides and 6 herbicides commonly applied with plastic mulching in south-east Spain were incubated at 35°C for 15 days with a 3×3 cm² square of plastic mulch (LDPE, PAC and Bio). The QuEChERS (Quick Easy Cheap Effective Rugged Safe) approach was adapted to extract the pesticides. The sorption behaviour depended on both the pesticide and the plastic mulch type. On average, the sorption percentage was ~23% on LDPE and PAC, and ~50% on Bio. The decay of active substances in the presence of plastic was, on average, 30% lower than the decay of active substances in solution alone. Therefore, the efficacy, transport, degradability and/or eco-toxicity of active substances from pesticides may be affected by sorption on plastics. Additionally, the sorption of pesticides on plastic debris may affect plastic degradability due to the toxicity of pesticides to some soil organisms.
How to cite: Beriot, N., Zomer, P., Zornoza, R., and Geissen, V.: Interactions between agricultural mulching plastic debris and pesticides, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4614, https://doi.org/10.5194/egusphere-egu2020-4614, 2020.
EGU2020-4736 | Displays | ITS2.9/SSS8.1
Stratigraphic relevance of macro- and microplastics in alluvial sediments – a first assessmentCollin J. Weber
Today it seems that we are living in the “plastic age”. Yet plastics, as an anthropogenic material and environmental pollutant, have only been in widespread use for about seven decades. The occurrence of both macro- and microplastics in different marine and terrestrial environments makes it possible to consider plastics as stratigraphic markers. The young age of plastic polymers, the global increase in plastic production since the 1950s and their resistance to environmental degradation could turn plastics into a useful stratal component. This applies to stratigraphic considerations as well as to geoarchaeological questions.
First results from the “Microplastics in floodplain soils” (MiFS) project, which investigates the spatial dynamics of microplastic in floodplain soils, allow a first assessment of the stratigraphic relevance of plastics in alluvial sediments. Alluvial sediments in floodplain areas are known as dynamic chemical and physical sinks as well as spatial transport corridors for sediments and pollutants. Therefore, floodplain soils could also act as an accumulation area for macro- and microplastics.
Four transects across the floodplain, distributed in the catchment of the Lahn river in the central German low mountain range, were sampled to a depth of two meters. Samples were dried and sieved, and the fractions ˃ 2 mm were analyzed visually using a stereomicroscope and identification criteria. To prevent overestimation, the suspected plastic objects were verified by ATR-FTIR spectroscopy. The larger microplastic fraction analyzed here seems particularly suitable for stratigraphic considerations, since this fraction is less prone to in-situ displacement by natural processes. The macro- and microplastic data were compared with sediment ages and sedimentation rates from a literature enquiry.
The results for macroplastic (˃ 5 mm) and larger microplastic (˃ 2 mm) contents show that plastic is detectable down to a depth of 70 cm. Common polymer types such as PE-LD, PE-HD, PP, PS, PMMA, PVC and PET, among others, could be identified. At the surface and in topsoils, macroplastic accumulations are found on a) river banks (superficially in vegetation or in young sandy river bank deposits) and b) fields under agricultural use. In subsoil samples, 75.75% of the identified plastic particles were found in near-channel samples located at the river embankment.
Comparing the distribution of macro- and larger microplastics in floodplain soils with sediment ages, sedimentation rates and floodplain morphology, it can be concluded that deposition of the plastic particles during the natural sedimentation process can only be expected at near-channel embankments. In other areas of the floodplain, an in-situ vertical displacement of the plastic particles by tillage or natural processes appears most probable, as the sediments must be significantly older. The use of plastics, and especially microplastics, as a stratal component in alluvial sediments must therefore be further discussed and investigated.
How to cite: Weber, C. J.: Stratigraphic relevance of macro- and microplastics in alluvial sediments – a first assessment, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4736, https://doi.org/10.5194/egusphere-egu2020-4736, 2020.
EGU2020-4829 | Displays | ITS2.9/SSS8.1
Microplastic of polystyrene in soil and water: fluxes study from industrial siteTamara Kukharchyk and Vladimir Chernyuk
This paper presents our experience of investigating the polystyrene content in soil and its distribution from an industrial enterprise where expanded polystyrene foam insulation has been produced for more than 40 years. Polystyrene is one of the most widely produced and used polymers. Once in the environment, this type of plastic breaks and crumbles easily and is dispersed by wind and water. Moreover, environmental pollution by polystyrene may be particularly serious because of hexabromocyclododecane, which can be present in polystyrene as a flame retardant additive. Unfortunately, environmental pollution with polystyrene and its behavior in soil and water are still very poorly studied.
Methodological approaches for sampling and polystyrene identification are shown. Since the enterprise is located on elevated ground close (500-700 m) to a small river with a temporary stream, direct flow of pollutants into the floodplain is possible. Therefore, soil and technogenic deposits at the industrial site as well as soil and groundwater within the floodplain were collected for study.
To identify plastic in solid samples, multiple stages were applied, including visual detection, drying, sieving (using mesh widths from 1 to 5 mm), flotation (with heating for the fractions of 1-2 mm and less than 1 mm), and removal of natural organic matter. For water samples, filtration was used.
Polystyrene was found in all solid (12) and liquid (4) samples. High numbers of polystyrene particles smaller than 5 mm were recorded in technogenic deposits (up to 16700 units/kg) and in soils (up to 1700 units/kg). Microplastic particles (less than 1 mm) were detected not only in the surface layer of soil (0-5 cm) but also at a depth of 10-15 cm. Discharges of small granules (less than 1 mm) of raw material (expanded polystyrene) into the environment, and their distribution with runoff away from the sources, were revealed.
The need for further investigation of plastic and microplastic pollution in terrestrial ecosystems within impact zones is discussed, including estimates of plastic discharges from the industrial area with waste, with surface runoff and via the runoff collector, in order to prevent pollution of aquatic ecosystems.
How to cite: Kukharchyk, T. and Chernyuk, V.: Microplastic of polystyrene in soil and water: fluxes study from industrial site, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4829, https://doi.org/10.5194/egusphere-egu2020-4829, 2020.
EGU2020-6631 | Displays | ITS2.9/SSS8.1
Plastic contamination of soil: is compost the source?Melanie Braun, Aylin Krupp, Rene Heyse, Matthias Mail, and Wulf Amelung
Plastic contamination is a major environmental topic; however, little is known about plastic contamination of agroecosystems. In particular, the prevalence of plastic in soil and the potential entry paths remain largely unknown. This study therefore evaluates to what degree compost application is a source of plastic for soil. To do so, we analyzed plastic in 8 different municipal and commercial composts and in the topsoil (0-30 cm) of a 12-year compost fertilizer trial with 0, 5, 10 and 20 t compost per hectare. After method testing and adjustment (yielding 76-100% recovery of spiked plastic particles), plastic was analyzed via density separation (ZnCl2) and light microscopy. We found 12±8 to 46±8 plastic items kg-1 compost; concentrations of plastic items > 5 mm were highly variable and ranged from 0.04±0.08 to 1.35±0.53 g kg-1 compost. In contrast to sewage sludge, which contains mostly fibers, particles were dominant in compost. In soil we found 0 to 66±8.5 plastic items kg-1 soil, with the highest plastic concentrations in the variants with the highest compost application, i.e. soils with compost application had 2 to 2.5 times higher plastic concentrations than the control variants. However, we also detected additional plastic sources, as fields on the border of the trial (near a road) had 3 times higher plastic concentrations than inner fields, leading to a plastic contamination of up to 23 items kg-1. Consequently, we could confirm compost application as an entry path for plastic into soil, leading to a twofold increase in the plastic contamination of agricultural soil. The determined plastic input via compost might be a minimum estimate, since small plastic items such as nanoplastics were not included, which warrants further attention.
How to cite: Braun, M., Krupp, A., Heyse, R., Mail, M., and Amelung, W.: Plastic contamination of soil: is compost the source?, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6631, https://doi.org/10.5194/egusphere-egu2020-6631, 2020.
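The per-hectare plastic load delivered by a compost application follows directly from the item concentration and the application rate. A back-of-envelope sketch, using the abstract's upper-bound figures (the calculation itself is ours, for illustration only):

```python
# Back-of-envelope sketch: plastic items added per hectare by one
# compost application. The concentrations are taken from the abstract;
# the calculation is illustrative, not part of the study.

def plastic_items_per_ha(items_per_kg_compost: float,
                         compost_t_per_ha: float) -> float:
    """Items/ha = (items per kg compost) x (kg compost per ha),
    converting tonnes to kilograms (1 t = 1000 kg)."""
    return items_per_kg_compost * compost_t_per_ha * 1000.0

# Upper-bound compost (46 items/kg) at the highest rate (20 t/ha):
print(plastic_items_per_ha(46, 20))  # 920000.0 items per hectare
```

Such an estimate excludes nanoplastics, consistent with the abstract's note that the reported input is likely a minimum.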
EGU2020-7430 | Displays | ITS2.9/SSS8.1
Microplastics and earthworms in soils: A case study on translocation, toxicity and fateNils Dietrich, Daniel Wilkinson, Florian Hirsch, Magdalena Sut-Lohmann, Antonia Geschke, and Thomas Raab
Microplastics are not only found in marine and lacustrine environments but also in soils. Microplastics enter natural soil environments from legal or illegal waste deposition. In arable soils, microplastics often stem from the decomposition of plastic sheeting. The accumulation of (micro-)plastic from the garbage bags in which biological waste is often disposed is also a significant problem for the recycling and composting of organic waste. Commercially available compostable bags are advertised as degradable; thus, these compostable bags ought to accumulate less in soils than non-compostable bags. We present a pilot study to determine the preference of earthworms (Lumbricus terrestris and Eisenia hortensis) for taking up and translocating different types of microplastic in soils. Our initial findings from the soil column experiment suggest that the earthworms show a strong tendency to take up microplastic. We also observed direct and indirect transport of microplastic by earthworms from the surface to deeper parts of the soil columns.
How to cite: Dietrich, N., Wilkinson, D., Hirsch, F., Sut-Lohmann, M., Geschke, A., and Raab, T.: Microplastics and earthworms in soils: A case study on translocation, toxicity and fate, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7430, https://doi.org/10.5194/egusphere-egu2020-7430, 2020.
EGU2020-7847 | Displays | ITS2.9/SSS8.1
Wind erosion of microplastics from soils: implications for microplastic dispersal and distributionJoanna Bullard, Annie Ockelford, Cheryl McKenna Neuman, and Patrick O'Brien
Microplastics have potentially deleterious effects on environments and ecosystems. The main research focus for translocation of microplastics has been via water; however, recent studies of soils in the Alps and Middle East indicate that airborne transport following wind erosion may also be significant. This paper reports wind tunnel studies to determine the extent to which two types of low-density microplastic (microbeads and fibres) may be preferentially transported from different substrates – a well-sorted quartz sand and a poorly-sorted soil containing 13% organics. The polyethylene microbeads had a size range of 212-250 microns and a density of 1.2 g cm-3. The polyester fibres were 5000 microns long and 500-1000 microns in width with a density of 1.38 g cm-3. Concentrations of microplastics in the initial wind tunnel bed ranged from 40-1040 mg kg-1, and the wind tunnel was used to determine the wind speeds at which intermittent and continuous saltation occurred using 0.25 m s-1 increments. Microplastics were entrained in all experiments regardless of the type of microplastic or substrate, but the threshold for entrainment was higher for soils (>10.8 m s-1) than for the sand bed (>6.9 m s-1). The lowest enrichment ratios (ER) for microplastics were associated with the entrainment of beads from the soil bed (ER = 0.5-7), whilst the highest ERs were found for fibres entrained from the soil bed (ER 100 - >1000). Overall, fibres were more likely to be entrained by wind than beads. The data will subsequently be used to explore the microplastic concentrations and emissions at source required to account for reported microplastic deposition at sink locations.
How to cite: Bullard, J., Ockelford, A., McKenna Neuman, C., and O'Brien, P.: Wind erosion of microplastics from soils: implications for microplastic dispersal and distribution, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7847, https://doi.org/10.5194/egusphere-egu2020-7847, 2020.
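The enrichment ratio reported above is commonly defined as the microplastic concentration in the transported (eroded) sediment divided by that in the source bed. A minimal sketch of that definition, with invented concentrations for illustration only:

```python
# Minimal sketch of the enrichment ratio (ER). The definition follows
# common usage in wind-erosion studies; the example concentrations are
# invented for illustration, not taken from the experiments.

def enrichment_ratio(conc_transported: float, conc_bed: float) -> float:
    """ER > 1: microplastics are preferentially entrained relative to
    the bulk bed material; ER < 1: they lag behind."""
    return conc_transported / conc_bed

# Hypothetical fibres at 500 mg/kg in eroded sediment, from a bed
# containing 40 mg/kg:
print(enrichment_ratio(500.0, 40.0))  # 12.5 -> strong enrichment
```

An ER well above 1, as reported for fibres, means wind erosion selectively exports microplastic relative to the bulk soil, concentrating it downwind of the source.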
EGU2020-9337 | Displays | ITS2.9/SSS8.1
The influence of microplastic on soil hydraulic propertiesPatrizia Hangele, Katharina Luise Müller, Hannes Laermanns, and Christina Bogner
The need to study the occurrence and effects of microplastic (MP) in different ecosystems has been made apparent by a variety of studies in recent years. Until recently, research regarding MP in the environment has mainly focused on marine systems. Within terrestrial systems, studies suggest that soils are the biggest sink for MP, and some studies have now started to explore the presence of MP in soils. However, there is a substantial lack of basic mechanistic understanding of the behaviour of MP particles within soils.
This study investigates how the presence of MP in soils affects their hydraulic properties. To understand these processes, experiments are set up under controlled laboratory conditions so as to keep unknown influencing variables to a minimum. Different substrates, from simple sands to undisturbed soils, are investigated in soil cylinders. MP particles of different sizes and forms of the most common plastic types are applied to the surface of the soil cylinders, which are then irrigated so that the MP particles infiltrate. Soil-water retention curves and soil hydraulic conductivity are measured before and after the application of MP particles. It is hypothesised that the infiltrated MP particles clog part of the pore space and should thus reduce soil hydraulic conductivity and change the soil-water retention curve of the sample. Knowledge about the influence of MP on soil hydraulic properties is crucial to understand the transport and retention of MP in soils.
How to cite: Hangele, P., Müller, K. L., Laermanns, H., and Bogner, C.: The influence of microplastic on soil hydraulic properties, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9337, https://doi.org/10.5194/egusphere-egu2020-9337, 2020.
EGU2020-9862 | Displays | ITS2.9/SSS8.1
Visualizing transport of microplastic particles on soil surfaces with an advanced-imaging sCMOS cameraMarcel Klee, Hannes Laermanns, Katharina Luise Müller, Florian Steininger, and Christina Bogner
The impact of microplastics on different ecosystems has recently become the subject of numerous studies. However, research in recent years has focused mainly on marine ecosystems and has so far neglected terrestrial environments. This has led to a substantial lack of knowledge about the transport mechanisms of microplastic in soils and sediments. While first studies in this field investigate the abundance of microplastic in soils, little is known about surface transport of microplastic particles.
The new approach of time-series analysis acquired by advanced scientific complementary metal–oxide–semiconductor (sCMOS) high-resolution cameras (Hardy et al., 2017, doi:10.1016/j.catena.2016.11.005) could enhance the understanding of surface transport mechanisms of microplastic. We used a flume box filled with different materials to trace the movements of fluorescent microplastic particles of 100 µm diameter under artificial irrigation. Furthermore, soil material from the German Wadden Sea was used to trace the run-off transport of microplastic in natural sediments. Here, we present first results on microplastic particle distribution, transport and accumulation on both macroscopic and microscopic scales.
How to cite: Klee, M., Laermanns, H., Müller, K. L., Steininger, F., and Bogner, C.: Visualizing transport of microplastic particles on soil surfaces with an advanced-imaging sCMOS camera, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9862, https://doi.org/10.5194/egusphere-egu2020-9862, 2020.
EGU2020-17818 | Displays | ITS2.9/SSS8.1
Wind erosion of microplastics from soils: linking soil surface properties with microplastic fluxAnnie Ockelford, Joanna Bullard, Cheryl McKenna-Neuman, and Patrick O'Brien
Recent studies of soils in the Alps and Middle East indicate that airborne transport of microplastics following wind erosion may be significant. Where microplastics have been entrained by wind, they show substantial enrichment ratios compared to mineral particle erosion. Further, microplastic shape affects enrichment ratios, with those for fibres greater than those for microbeads, which may reflect the lower density and asymmetric shape of microplastics compared to soil particles. This suggests that terrestrial-to-atmospheric transfer of microplastics could be a significant environmental transport pathway. However, we currently have very little understanding of how the properties, in particular the surface characteristics, of the sediment from which they are eroded affect their entrainment potential.
This paper reports wind tunnel studies run to explore the impacts of soil surface characteristics on microplastic flux by wind erosion. Experiments were performed in a boundary layer simulation wind tunnel with an open-loop suction design. The tunnel has a working section of 12.5 m x 0.7 m x 0.76 m and is housed in an environmental chamber which, for this study, was held constant at 20 °C and 20% RH. In the experiments, two types of low-density microplastic (microbeads and fibres) were mixed into a poorly-sorted soil containing 13% organics. The polyethylene microbeads had a size range of 212-250 microns and a density of 1.2 g cm-3, and the polyester fibres were 5000 microns long and 500-1000 microns in width with a density of 1.38 g cm-3. Microplastics were mixed into the sediment in concentrations ranging from 40-1040 mg kg-1. For each experiment, test surfaces were prepared by filling a 1.0 m x 0.35 m x 0.025 m metal tray with the given mixture of test material, which was lowered into the wind tunnel such that it was flush with the tunnel floor and levelled. The wind tunnel was then switched on and run with increasing wind speeds using 0.25 m s-1 increments until continuous saltation occurred. Soil surface roughness was scanned prior to and after each experiment using a high-resolution laser scanner (0.5 mm resolution over the entire test section). Transported soil and microplastic particles were captured in bulk using a 2 cm wide by 40 cm tall Guelph-Trent wedge trap positioned 2 m downwind of the test bed.
Discussion concentrates on linking the changes in soil surface topography to the magnitude of microplastic flux, where the data show a correlation between the development of the soil surfaces and overall microplastic flux. Specifically, soil surface roughness is seen as a significant control on microplastic flux, with a greater overall effect on microplastic fibre flux than on microplastic bead flux. The outcome of this research is pertinent to developing understanding of the likely controls on, and hence the propensity of, microplastics to be entrained from soil by wind erosion.
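The correlation described above, between surface development and microplastic flux, can be illustrated with a simple Pearson coefficient between per-run roughness change and trapped flux; the sketch below uses invented numbers purely for illustration (it is not the authors' data or analysis):

```python
# Pearson correlation between the change in surface roughness (pre/post
# laser scans) and the microplastic flux caught in the downwind trap.
import statistics

def pearson_r(x, y):
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

roughness_change_mm = [0.2, 0.5, 0.9, 1.4, 2.1]   # hypothetical per-run values
fibre_flux_mg_m2_s = [1.1, 2.4, 4.0, 6.2, 9.0]    # hypothetical per-run values
print(round(pearson_r(roughness_change_mm, fibre_flux_mg_m2_s), 3))
```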
How to cite: Ockelford, A., Bullard, J., McKenna-Neuman, C., and O'Brien, P.: Wind erosion of microplastics from soils: linking soil surface properties with microplastic flux, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17818, https://doi.org/10.5194/egusphere-egu2020-17818, 2020.
EGU2020-18272 | Displays | ITS2.9/SSS8.1
Plastics in Agriculture: Sources, mass balance and transport to local aquatic environmentsNina Buenaventura, Sissel Brit Ranneklev, Rachel Hurley, Inger Lise Nerland Bråte, and Christian Vogelsang
The properties of plastic products have made them important features of everyday life, including in Norwegian agriculture. According to Grønn Punkt Norway’s statistics, agriculture is the third largest sector for plastic consumption, after domestic use and industry. Most of the plastic used in agriculture goes into the production of round hay bales and into agricultural films laid to protect and improve crops, known as mulching. The films may be subjected to weathering through mechanical stress, oxidation and photodegradation, leading to fragmentation. These plastic particles can be dispersed into the soil and pass via drainage networks from agricultural soil into local aquatic environments. Additional agricultural plastic sources may be fertilisers, pesticides, and sewage sludge applied to land. This preliminary study investigated soil and runoff water from agricultural fields in the Morsa catchment. Concentrations of plastic (number of plastic particles) and types of plastic were determined in the soil and water samples. The selected sites had berry, grass, and cereal crops. Several fields were selected to represent sludge application: two had sludge applied recently, in 2018, and two had received historical application, 7-8 years ago. In one area, plastic film was used to cover berry crops for protection. Non-biobased biodegradable PBAT plastic was used as mulching film in one of the vegetable areas. A further type of mulching, which blocks solar insolation to adjust soil temperature and restricts the wavelengths that encourage weed growth, was used in a different vegetable field. One vegetable field that had not used plastic products (including sludge application) in the past was considered as a reference field in this study.
Globally, only a few studies have measured microplastics in agricultural soils, and none in Norway to date. The concentrations of plastics above 50 µm found in the samples from Morsa were low, except where mulching with the PBAT plastic film occurred. Microplastic PBAT concentrations were considered high, and the soil contamination was comparable with values reported for soils undergoing intense agricultural production in other parts of the world. In runoff water from a field where cereals and grass were grown, the concentrations of microplastics were considered high compared to other reported values from freshwater systems. This indicates that plastics can be mobilised from agricultural soils to the aquatic environment, and that films from agricultural use may represent an important source. Polyethylene fragments were the dominant particle type found in the runoff water and may have originated from the soil, as they were also the most dominant particle type in the corresponding field. A total of 14 different plastic polymer types were found in the soil samples, but there was little agreement between the use of plastics (e.g. agricultural film) and the types found in the soil. Samples from areas where neither sludge nor film was used also contained microplastics. The overall dominant particle morphologies were fragments, fibres and films. These data represent the first baseline assessment of microplastic contamination in agricultural soils undergoing a range of different plastic application types in Norway.
How to cite: Buenaventura, N., Ranneklev, S. B., Hurley, R., Nerland Bråte, I. L., and Vogelsang, C.: Plastics in Agriculture: Sources, mass balance and transport to local aquatic environments, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18272, https://doi.org/10.5194/egusphere-egu2020-18272, 2020.
EGU2020-19200 | Displays | ITS2.9/SSS8.1
Quantification of Microplastics in environmental samples using pressurized liquid extraction and Pyr-GC/MSCorinna Földi
Fast and reliable quantification of microplastics in environmental samples is currently a challenging task. To enable monitoring of microplastics, a fast and robust method for sample preparation and subsequent analysis is urgently needed. Therefore, a combination of pressurized liquid extraction and Pyr-GC/MS has been developed. The fully automated extraction includes a pre-extraction with methanol for matrix elimination and a subsequent main extraction of microplastics with tetrahydrofuran to enrich the microplastics on silica gel, which is then analyzed by means of Pyr-GC/MS.
Several commonly occurring organic matrices known to cause GC interferences were tested for their elimination by pressurized liquid extraction. For the most frequently used synthetic polymers PE, PP, and PS, extraction efficiencies of 113-131 %, 80-98 %, and 70-118 %, respectively, and limits of quantification down to 0.005 mg/g were achieved.
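As a minimal sketch of the figures of merit quoted above (our illustration, not the authors' code): extraction efficiency is the recovered polymer mass relative to the spiked mass, and measured contents are only reported when above the limit of quantification. The numbers in the example are hypothetical.

```python
# Spike-recovery based extraction efficiency, in percent.
# Values above 100 % (as reported for PE) indicate positive matrix interference.

def extraction_efficiency_pct(recovered_mg: float, spiked_mg: float) -> float:
    return 100.0 * recovered_mg / spiked_mg

LOQ_MG_PER_G = 0.005  # limit of quantification stated in the abstract

def quantifiable(content_mg_per_g: float) -> bool:
    """True if a measured content can be reported above the LOQ."""
    return content_mg_per_g >= LOQ_MG_PER_G

# Hypothetical spike-recovery: 1.13 mg PE recovered from a 1.00 mg spike.
print(round(extraction_efficiency_pct(1.13, 1.00), 1))  # → 113.0
```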
The developed method was validated and applied to environmental samples with complex matrices such as roadside soils, potting soils, and sewage sludge. In all these matrices, PE, PP, and PS were detected, with contents ranging from 0.8 to 3.3, 0.01 to 0.36, and 0.06 to 0.61 mg/g, respectively. However, calcined sea sand spiked with wood, leaves, and humic acids, respectively, was found to interfere with PE quantification (0.140, 0.210, and 0.050 mg/g). Reduction of these interferences will be further evaluated.
How to cite: Földi, C.: Quantification of Microplastics in environmental samples using pressurized liquid extraction and Pyr-GC/MS, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19200, https://doi.org/10.5194/egusphere-egu2020-19200, 2020.
EGU2020-22456 | Displays | ITS2.9/SSS8.1
Pathways of microplastics in soils - Detection of microplastic contents in compost using a thermal decomposition methodYosri-Kamal Wiesner, Axel Müller, Claus Gerhard Bannick, Marius Bednarz, and Ulrike Braun
The ubiquitous presence of unintended plastics in the environment has been an issue in scientific studies and public debate in recent years. It is well known that oxidative degradation and subsequent fragmentation, caused by UV radiation, aging and abrasion, lead to the decomposition of larger plastic products into microplastics (MP). Possible effects of these MP on ecosystems are still unclear. Recent studies on MP findings focus mainly on aquatic systems, while little is known about MP in terrestrial ecosystems. Fermentation residues, sewage sludge and compost represent an input path of plastics into soils through targeted application in agriculture. For this reason, analysis of the total content of plastic in organic fertilizers, as a sink and source of MP in ecosystems, is of high interest.
In 2017, approximately 14.2 million tons of biodegradable waste were collected, from which 3.9 million tons of compost were produced. Improper waste separation results in plastic fragments in the biowaste, some of which end up in the compost and might degrade to MP. In Germany, compost is used as fertilizer in agriculture and landscape design; hence MP could enter the soil by this pathway. Spectroscopic methods such as Raman or FTIR are not suitable for determining the mass content of microplastic, as these output a particle number.
Therefore, we show the application of Thermal Extraction Desorption-Gas Chromatography-Mass Spectrometry (TED-GC-MS) as a fast, integral analytical technique which, in contrast to the spectroscopic methods, does not measure the number of particles but a mass content. The sample is pyrolyzed up to 600 °C in a nitrogen atmosphere and a portion of the pyrolysis gases is collected on a solid-phase adsorber. Afterwards, the decomposition gases are desorbed and measured in a GC-MS system. Characteristic pyrolysis products of each polymer can be used to identify the polymer type and determine its mass content in the sample. This method is well established for the analysis of MP in water filtrate samples. Here, we demonstrate the use of TED-GC-MS for compost for the first time.
This study will also give insight into various important aspects of sample preparation, which include a meaningful size fractionation, a necessary density separation to remove inorganic contents and, finally, homogenization.
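The quantification step of such a pyrolysis-based workflow can be sketched as follows: the peak area of a polymer-specific pyrolysis marker is converted to polymer mass via an external calibration line and normalised to the weighed sample mass. All calibration values below are hypothetical, for illustration only, and do not come from the abstract.

```python
# Mass content of one polymer (mg polymer per g sample) from the peak area
# of its characteristic pyrolysis product, using a linear calibration:
# area = slope * mass + intercept.

def polymer_content_mg_per_g(peak_area: float, slope_area_per_mg: float,
                             intercept_area: float, sample_mass_g: float) -> float:
    polymer_mg = (peak_area - intercept_area) / slope_area_per_mg
    return polymer_mg / sample_mass_g

# Hypothetical: area 1200, slope 1000 area units/mg, intercept 200,
# 0.5 g of compost pyrolysed.
print(polymer_content_mg_per_g(1200, 1000, 200, 0.5))  # → 2.0
```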
How to cite: Wiesner, Y.-K., Müller, A., Bannick, C. G., Bednarz, M., and Braun, U.: Pathways of microplastics in soils - Detection of microplastic contents in compost using a thermal decomposition method, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22456, https://doi.org/10.5194/egusphere-egu2020-22456, 2020.
EGU2020-22555 | Displays | ITS2.9/SSS8.1
Density separation of soils as sample preparation for the determination of plasticsMarius Bednarz, Nathan Obermaier, and Claus Gerhard Bannick
Plastics are found ubiquitously in all environmental media. Evidence of microplastic occurrence has also been provided for various biota. At the beginning of the scientific debate, the oceans as final plastic sinks were in the foreground, whereas current research focuses on the sources of input, including surface waters. These surface waters are influenced by urban and rural areas, including the adjoining soils.
Like the oceans, soils are a final sink for many substances, including plastics. Sources of plastics are diverse and depend on use and management. With respect to analytics, soil material is much more complex than suspended solids in water. Therefore, the type of soil, its grain size, its organic content as well as the metal ions it contains are important parameters.
For the detection of plastics, different analytical methods exist. Spectroscopic methods determine particle numbers, sizes, and shapes. Pyrolytic methods return the total contents of plastics within the sample. These include Thermal Extraction Desorption-Gas Chromatography-Mass Spectrometry (TED-GC-MS).
Many environmental samples contain substances that interfere with both detection and sample preparation. Thus, mineral components must be removed to allow better grinding. Density separation is suitable for their removal. In this contribution, experiments with density separation are presented.
There are different options for preparing solid samples by density separation, with major methodological differences in the selection of the separation solution and in the phase separation.
Various plastic-spiked solid samples (terrestrial and subhydric soils) were biologized. Subsequently, recovery tests were carried out using a density separation method with different separation solutions.
Ultrasound was used to destroy soil agglomerates and release occluded plastics. The separated, floated material was drawn off through a 6 μm stainless-steel filter. The plastic content in the rinsed organic material was quantified by TED-GC-MS analysis.
The presented method shows medium (PE: 47-82 %) to high (PS: 89-100 %) recovery rates depending on the separation solution used and the environmental sample examined.
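The recovery rates quoted above follow the usual spike-recovery definition; a minimal sketch with invented numbers (not the authors' data):

```python
# Recovery rate after density separation: mass of spiked polymer found
# (here, by TED-GC-MS) relative to the mass originally spiked, in percent.

def recovery_pct(recovered_mg: float, spiked_mg: float) -> float:
    return 100.0 * recovered_mg / spiked_mg

# Hypothetical: 5.0 mg of a 10.0 mg PE spike recovered after separation,
# i.e. within the medium 47-82 % range reported for PE.
print(recovery_pct(5.0, 10.0))  # → 50.0
```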
How to cite: Bednarz, M., Obermaier, N., and Bannick, C. G.: Density separation of soils as sample preparation for the determination of plastics, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22555, https://doi.org/10.5194/egusphere-egu2020-22555, 2020.
EGU2020-22624 | Displays | ITS2.9/SSS8.1
Microbial life on waste: fungal communities on plastic debris from dumpsites in East Africa – Gerasimos Gkoutselis and Gerhard Rambold
The plastic waste input into terrestrial ecosystems is a serious and ongoing problem, particularly in developing countries due to deficient or non-existent recycling management. A so-called 'plastic ban' was proclaimed in Kenya in 2017. Despite the ban, waste of all kinds of plastics, mainly polyethylene (PE), is still present in large amounts, particularly in the municipal environment of the country, where plastic solid waste (PSW) permeates the upper layers of the soil. Microorganisms are the key players in the decomposition of (polymeric) materials. Landfills (dumpsites) are designated hot spots of environmental pollution with plastics. Therefore, landfills and plastic-contaminated sites in the town of Siaya (Western Kenya) are considered suitable locations for discovering, with high probability, so-called soil-borne, 'plasticophilic' microorganisms. Since microfungal diversity in these regions is virtually unknown, a high-throughput method was applied to obtain a first overview of potential fungal plastic degraders and the composition of their respective communities. The focus of the screening was laid on the distinction between directly plastic-associated and generally soil-dwelling fungi. In other words, the aim was to characterise, via community barcoding, associations of specifically plastic-colonizing species or OTUs in comparative analyses of both substrates, i.e. bulk soil and (macro)plastic. Ultimately, the aim of this study was to identify those 'key species' that contribute most to β-diversity through far-reaching adaptations to this anthropogenic trophic niche. Eventually, this investigation marks an initiation point for a comprehensive screening in equatorial Africa for the isolation of fungi capable of plastic biodegradation.
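Identifying the species that contribute most to β-diversity between two substrates can be done in the spirit of a SIMPER-style decomposition of Bray-Curtis dissimilarity. A hedged sketch (the abstract does not name its exact method; OTU names and counts below are invented):

```python
# Sketch: per-OTU contribution to the Bray-Curtis dissimilarity between
# two substrates (plastic debris vs. bulk soil). All counts are invented.
def bray_curtis_contributions(plastic, soil):
    """Return each OTU's share of the total Bray-Curtis dissimilarity."""
    total = sum(plastic.values()) + sum(soil.values())
    contrib = {}
    for otu in set(plastic) | set(soil):
        contrib[otu] = abs(plastic.get(otu, 0) - soil.get(otu, 0)) / total
    return contrib

plastic = {"OTU_A": 40, "OTU_B": 5, "OTU_C": 0}   # hypothetical read counts
soil = {"OTU_A": 10, "OTU_B": 5, "OTU_C": 20}
c = bray_curtis_contributions(plastic, soil)
# The OTU dominating the dissimilarity is a candidate 'key species'.
print(max(c, key=c.get))  # OTU_A
```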
How to cite: Gkoutselis, G. and Rambold, G.: Microbial life on waste: fungal communities on plastic debris from dumpsites in East Africa, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22624, https://doi.org/10.5194/egusphere-egu2020-22624, 2020.
ITS2.10/NP3.3 – Urban Geoscience Complexity: Transdisciplinarity for the Urban Transition
EGU2020-20466 | Displays | ITS2.10/NP3.3
The role of climate information for the urban transition towards sustainability – Gaby Langendijk, Diana Rechid, and Daniela Jacob
Urban areas are prone to climate change impacts. A transition towards sustainable urban systems relies heavily on useful, evidence-based climate information on urban scales.
However, many urban climate models and regional climate models are currently either not scale-compliant for cities, or do not cover essential parameters and/or urban-rural interactions under climate change conditions. Furthermore, while phenomena such as the urban heat island are comparatively well understood, others, such as moisture change, remain little researched. Our research shows the potential of regional climate models within the EURO-CORDEX framework to provide climate information on urban scales at 11 km and 3 km grid sizes. The city of Berlin is taken as a case study. The results show that the regional climate models simulate a difference between Berlin and its surroundings for temperature- and humidity-related variables. There is an increasing urban dry island in Berlin towards the end of the century, as well as an increasing urban heat island. The study shows the potential of regional climate models to provide climate change information on urban scales.
For climate information to underpin the urban transition, it needs to be put in a decision-making context. As an example, the research aims to understand connections to the health sector and how to integrate the information in order to manage, for instance, the dispersion of pollen in cities, assisting in mitigating pollen allergies. The research showcases an interdisciplinary way forward: firstly, to produce climate information on urban scales, and secondly, to connect it to city sectors in a suitable manner to underpin the transition to sustainable urban systems.
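The urban heat (or dry) island signal discussed above is, at its core, an urban-minus-rural contrast of a simulated field. A minimal sketch of that diagnostic, with invented grid-cell values and an invented urban mask:

```python
# Minimal sketch of the urban-rural contrast: average a simulated field
# (e.g. 2 m temperature or humidity) over urban and rural grid cells and
# take the difference. Values and the mask are invented for illustration.
def urban_rural_difference(field, urban_mask):
    urban = [v for v, is_urban in zip(field, urban_mask) if is_urban]
    rural = [v for v, is_urban in zip(field, urban_mask) if not is_urban]
    return sum(urban) / len(urban) - sum(rural) / len(rural)

t2m = [22.4, 23.1, 21.0, 20.6]      # degrees C, four grid cells
mask = [True, True, False, False]   # first two cells are urban
print(round(urban_rural_difference(t2m, mask), 2))  # positive -> heat island
```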
How to cite: Langendijk, G., Rechid, D., and Jacob, D.: The role of climate information for the urban transition towards sustainability, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20466, https://doi.org/10.5194/egusphere-egu2020-20466, 2020.
EGU2020-20805 | Displays | ITS2.10/NP3.3 | Highlight
Analysis of heat wave features and urban heat island effect under climate change – Ye Tian, Klaus Fraedrich, and Feng Ma
Extreme events such as heat waves occurring in urban areas have a large influence on human life due to population density. For urban areas, the urban heat island effect can further exacerbate the heat stress of heat waves. Meanwhile, global climate change over the last few decades has changed the pattern and spatial distribution of local-scale extreme events. Commonly used climate models can capture broad-scale spatial changes in climate phenomena, but representing extreme events on local scales requires data with finer resolution. Here we present a deep-learning-based downscaling method to capture localized near-surface temperature features from climate models in the Coupled Model Intercomparison Project 6 (CMIP6) framework. The downscaling is based on super-resolution image processing methods that build relationships between coarse and fine resolutions. This downscaling framework is then applied to future emission scenarios over the period 2030 to 2100. The influence of future climate change on the occurrence of heat waves in urban areas, and its interaction with the urban heat island effect, is studied for the ten most densely populated cities in China. Heat waves are defined based on air temperature, and the urban heat island is measured by the urban-rural difference in 2 m air temperature. Improvements in data resolution enhance the utility of the surface air temperature record for such assessments. Comparisons of urban heat waves from multiple climate models suggest that near-surface temperature trends and heat island effects are strongly affected by global warming. High-resolution climate data offer the potential for further assessment of urban warming influences worldwide.
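Temperature-based heat wave definitions typically require a minimum run of consecutive days above a threshold. The abstract does not state its exact criterion, so the sketch below assumes a common one (at least three consecutive days above a fixed threshold); threshold, run length, and the temperature series are all illustrative assumptions:

```python
# Illustrative heat-wave detector (assumed definition: >= 3 consecutive
# days with daily maximum temperature above a threshold).
def heat_wave_days(tmax, threshold=35.0, min_run=3):
    """Count days belonging to runs of at least min_run days above threshold."""
    days, run = 0, 0
    for t in tmax + [float("-inf")]:   # sentinel flushes the final run
        if t > threshold:
            run += 1
        else:
            if run >= min_run:
                days += run
            run = 0
    return days

series = [34.0, 36.1, 36.5, 35.8, 33.0, 36.2, 34.9]  # invented daily maxima
print(heat_wave_days(series))  # 3: one qualifying 3-day run
```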
How to cite: Tian, Y., Fraedrich, K., and Ma, F.: Analysis of heat wave features and urban heat island effect under climate change, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20805, https://doi.org/10.5194/egusphere-egu2020-20805, 2020.
EGU2020-11820 | Displays | ITS2.10/NP3.3
Agent-based Modeling of Human Exposure to Urban Environmental Stressors – A Docking Study – Sascha Hokamp, Sven Rühe, and Jürgen Scheffran
The goal of environmental exposure modelling is to link fundamental human activities with stress via the environment. Stress is here defined as environmental conditions negatively affecting human health and well-being. Especially in urban areas, humans can be exposed to multiple stressors such as air pollution, noise (e.g. traffic), and heat. The importance of being able to predict the exposure level in urban areas is increasing due to ongoing urbanization and global climate change. For instance, in Germany annual greenhouse gas (GHG) emissions were reduced by 28% from 1990 to 2014, but contributions by the transport sector have been quite stable (from 0.163 GtCO2-equivalents in 1990 to 0.160 GtCO2-equivalents in 2014; Umweltbundesamt, 2016). Yang et al. (2018) provide a stylized agent-based model of human exposure to environmental stressors (heat, rain, NO2) for Hamburg, Germany. Within this ABM, the changing exposure to environmental stressors is analyzed for citizens as a function of time and location. The population is classified into different archetypes, ranging from young, single students to families with children to old, rich, single persons. While their choice of transportation is a function of exposure, commuting time and costs, each agent has different preferences and different rates of adaptation to changing environmental conditions. The agents move in multiple layers of housing (e.g. residential buildings) and infrastructure (e.g. streets, subway). Depending on the agent type, bike, car or public transport is chosen as the preferred means of transport. However, Yang et al. (2018) consider stylized agent-based dynamics without any interaction among the agents. We provide a multi-agent docking study of human exposure to environmental stressors implemented in NetLogo and find distributional and relational equivalence (Axtell et al., 1996; Hokamp et al., 2018) to Yang et al. (2018).
To put it differently, we analyze interacting, individually heterogeneous agents in an actual urban environment. The results provide information about the means of transportation with the lowest exposure, and about how very low costs for public transport affect transportation choices and thus road traffic. Further, the results may be used by policy makers and citizens (e.g. via mobile devices using an app) to improve environmental quality of life.
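The mode-choice rule described above (exposure, commuting time, and cost weighted by agent-specific preferences) can be sketched as follows. This is a hedged illustration in Python rather than the study's NetLogo code; mode names, attribute values, and the weight vector are all assumptions:

```python
# Hedged sketch of agent transport-mode choice: each agent weighs
# exposure, commuting time and cost with its own preferences and picks
# the mode with the lowest weighted score. All numbers are invented.
def choose_mode(modes, weights):
    """modes: {name: (exposure, time_min, cost_eur)}; weights: (we, wt, wc)."""
    we, wt, wc = weights
    score = {m: we * e + wt * t + wc * c for m, (e, t, c) in modes.items()}
    return min(score, key=score.get)

modes = {"bike": (0.8, 30, 0.0), "car": (0.3, 20, 4.0), "public": (0.5, 35, 2.8)}
student = (1.0, 0.05, 1.0)   # a cost-sensitive agent archetype
print(choose_mode(modes, student))  # cheapest weighted option wins
```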
References
Axtell, R., Axelrod, R., Epstein, J.M., and Cohen, M.D. (1996) Aligning simulation models: a case study and results. Computational & Mathematical Organization Theory, 1 (2), 123–141.
Hokamp, S., Gulyas, L., Koehler, M. and Wijesinghe, S. (2018), Agent-based Modelling and Tax Evasion: Theory and Application, 3-35, Hoboken, NJ, John Wiley & Sons Ltd.
Umweltbundesamt (2016) Submission under the United Nations Framework Convention on Climate Change and the Kyoto Protocol 2016 – National Inventory Report for the German Greenhouse Gas Inventory 1990-2014.
Yang, L. E., Hoffmann, P., Scheffran, J., Rühe, S., Fischereit, J. and Gasser, I. (2018), An Agent-Based Modeling Framework for Simulating Human Exposure to Environmental Stresses in Urban Areas, Urban Science, 2, 36.
How to cite: Hokamp, S., Rühe, S., and Scheffran, J.: Agent-based Modeling of Human Exposure to Urban Environmental Stressors – A Docking Study, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11820, https://doi.org/10.5194/egusphere-egu2020-11820, 2020.
EGU2020-13031 | Displays | ITS2.10/NP3.3
Non-linear interactions of urban and freshwater systems: Exploring implications for sustainability and water planning and management – Héctor Angarita, Vishal Mehta, and Efraín Domínguez
The human population is progressing towards a predominantly urban configuration. Currently, 3.5 billion people – 55% of the total human population – live in urban areas, with an increase to 6.68 billion (68%) projected by 2050. In this progressively more populated world, a central issue of sustainability assessments is understanding the role of cities as entities that, despite their comparatively small physical footprint (less than 0.5% of the global area), demand resources at regional and global scales.
Many of the resources that sustain urban populations directly depend on the freshwater system: from direct fluxes to and from the immediate environment of cities for water supply or waste elimination, to water-dependent activities like biomass (food, biofuels, fibers) and energy production. Urban and freshwater system interactions are subject to multiple sources of non-linearity. Factors like the patterns of size, spatial distribution and interconnection of groups of cities, or the nested and hierarchical character of freshwater systems, can vastly influence the amount of resources required to sustain and grow an urban population; likewise, equivalent resource demands can be met through different management strategies that vary substantially in the cumulative pressure they exert on the freshwater system.
Here we explore the non-linear character of those interactions in order to i) identify water management options to avoid, minimize or offset regional impacts of growing urban populations, and ii) explore the long-term implications of such non-linearities for the sustained resource base of urban areas. We propose a framework integrating three elements: 1) properties of the size and spatial distribution of urban centers, 2) the scaling regime of urban energy and resource dependencies, and 3) the scaling regime of associated physical and ecological impacts on freshwater systems.
An example of this approach is presented in a case study in the Magdalena River Basin – MRB (Colombia). The basin covers nearly one quarter of Colombia's national territory and provides sustenance to 36 million people, with three quarters of basin inhabitants living in medium to large urban settlements of 12 000 or more inhabitants, and 50% concentrated in the 15 largest cities. The case study results indicate that freshwater-mediated resource dependencies of the urban population are described by a linear or super-linear regime, indicating a lack of scale economies; however, the freshwater systems' capacity to assimilate those resource demands is characterized by a sublinear regime. As a result, current practices and technological approaches to coupling freshwater and urban systems will not be able to withstand the resource demands of mid-term future population scenarios. Our approach allows us to quantify the projected gaps to achieving a sustained resource base for urban systems in the MRB.
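The linear, super-linear, and sublinear regimes mentioned above are usually diagnosed by fitting a power law Y = c·N^β to city size N and quantity Y, with β > 1 super-linear and β < 1 sublinear. A minimal sketch of that fit via least squares in log-log space (the abstract does not describe its estimation method, and the city populations and demands below are synthetic):

```python
# Sketch: estimate the scaling exponent beta in Y = c * N**beta by
# ordinary least squares on log-transformed data. Data are synthetic.
import math

def scaling_exponent(populations, quantities):
    x = [math.log(p) for p in populations]
    y = [math.log(q) for q in quantities]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))

pops = [1e4, 1e5, 1e6, 1e7]
demand = [2.0 * p ** 1.15 for p in pops]   # built-in super-linear signal
print(round(scaling_exponent(pops, demand), 2))  # 1.15
```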
How to cite: Angarita, H., Mehta, V., and Domínguez, E.: Non-linear interactions of urban and freshwater systems: Exploring implications for sustainability and water planning and management, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13031, https://doi.org/10.5194/egusphere-egu2020-13031, 2020.
EGU2020-18578 | Displays | ITS2.10/NP3.3
High-resolution polarimetric radar network for improving urban resilience to natural disasters in a complex environment – Chandrasekar V. Chandra, Haonan Chen, and Rob Cifelli
The operational Weather Surveillance Radar – 1988 Doppler (WSR-88D) network is an efficient tool for observing hydrometeorological processes and forms the cornerstone of national weather forecast and warning systems. However, the observational performance of the WSR-88D network is severely hampered over the western U.S., because 1) the radar network density is lower than over the eastern U.S., and 2) WSR-88D radar beams are often partially or fully blocked by the mountainous terrain of the western U.S.
For example, the San Francisco Bay Area in Northern California, which supports one of the most prosperous economies in the U.S., is expected to be covered by two WSR-88D radars: KMUX and KDAX. The KMUX radar is located in the Santa Cruz Mountains at an elevation of over 1000 m above mean sea level (AMSL), compared with the densely populated valley regions, which lie near sea level. Typically, storms in Northern California have freezing levels at approximately 1–2 km AMSL. As the distance from the radar increases, the KMUX radar beam can easily overshoot the mixed-phase hydrometeors in the bright band, or the snowflakes above it, even if it is raining at the ground. The KDAX radar is located near sea level in Davis, California. However, the KDAX radar beams are partially blocked by the Coast Ranges at low elevation angles. The coverage limitations of the KMUX and KDAX radars are further compounded by the complex precipitation microphysics resulting from land-ocean interaction in the coastal regions and orographic enhancement in the mountainous regions. As a result, it is still challenging to monitor and predict changing atmospheric conditions using operational radars in the Bay Area, which makes the Bay Area particularly susceptible to catastrophic flooding that disrupts transportation, threatens public safety, and negatively impacts water quality.
In this paper, we present an Advanced Quantitative Precipitation Information (AQPI) system built by NOAA and collaborating partners to improve the monitoring and forecasting of precipitation and coastal flooding in the Bay Area. The high-frequency (i.e., C- and X-band) high-resolution gap-filling radars deployed as part of the AQPI program are detailed. A radar-based rainfall system is designed to improve real-time precipitation estimation over the Bay Area. The sensitivity of rainfall products to the occurrence of hydrologic extremes is investigated through a distributed hydrological model to improve streamflow forecasts. The performance of rainfall estimates and associated hydrological impacts during the 2018-2019 and 2019-2020 winter storm seasons is quantified in the context of improving urban resiliency to natural disasters in such a complex environment.
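One common building block of radar-based rainfall estimation, shown purely for illustration (the AQPI system's actual retrieval algorithms, which for polarimetric radars typically also use differential phase and reflectivity, are not described in this abstract), is inverting a Z-R power law Z = a·R^b, here with the classic Marshall-Palmer coefficients:

```python
# Illustrative Z-R inversion (Marshall-Palmer coefficients a=200, b=1.6);
# not necessarily the relation used by the AQPI rainfall system.
def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    """Rain rate in mm/h from reflectivity in dBZ, assuming Z = a * R**b."""
    z = 10.0 ** (dbz / 10.0)           # dBZ -> linear reflectivity (mm^6/m^3)
    return (z / a) ** (1.0 / b)

print(round(rain_rate_from_dbz(40.0), 1))  # ~11.5 mm/h at 40 dBZ
```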
How to cite: Chandra, C. V., Chen, H., and Cifelli, R.: High-resolution polarimetric radar network for improving urban resilience to natural disasters in a complex environment, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18578, https://doi.org/10.5194/egusphere-egu2020-18578, 2020.
EGU2020-19163 | Displays | ITS2.10/NP3.3
On Statistical Modeling of Extreme Rainfall Processes for Urban Water Infrastructure Design in the Context of Climate Change
Van-Thanh-Van Nguyen
There exists an urgent need to assess the possible impacts of climate change on Intensity-Duration-Frequency (IDF) relations in general, and on the design storm in particular, for improving the design of urban water infrastructure in a changing climate. At present, the derivation of IDF relations under climate change at a location of interest is recognized as one of the most challenging tasks in current engineering practice. The main challenge is how to establish the linkages between the climate projections given by Global Climate Models (GCMs) at the global scale and the observed extreme rainfalls at a given local site. If these linkages could be established, then the projected climate change conditions given by GCMs could be used to predict the resulting changes in local extreme rainfalls and related runoff characteristics. Consequently, innovative downscaling approaches are needed for modeling extreme rainfall (ER) processes over a wide range of temporal and spatial scales for climate change impact and adaptation studies in urban areas. The overall objective of the present paper is therefore to provide an overview of recent progress in the modeling of extreme rainfall processes in a changing climate, from both theoretical and practical viewpoints. In particular, the main focus is on recently developed statistical downscaling (SD) methods for linking GCM climate predictors to observed daily and sub-daily rainfall extremes at a single site as well as at many sites concurrently. In addition, new SD procedures are presented for describing the linkages between GCM outputs and rainfall characteristics at locations where rainfall data are limited or unavailable, a common and crucial challenge in engineering practice.
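To illustrate the statistical machinery behind IDF and design-storm estimation (a generic sketch, not the author's specific downscaling method), one classical step is fitting an extreme-value distribution to annual maximum rainfall intensities and reading off return levels. The Gumbel fit below uses the method of moments; the sample intensities are hypothetical.

```python
import math

def gumbel_fit(annual_maxima):
    """Method-of-moments Gumbel fit: returns (location mu, scale beta)."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi
    mu = mean - 0.5772 * beta  # 0.5772 ~ Euler-Mascheroni constant
    return mu, beta

def return_level(mu, beta, T):
    """Rainfall intensity exceeded on average once every T years."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

# Hypothetical annual maximum 1-h intensities (mm/h) at one station:
maxima = [22.1, 35.4, 28.0, 41.2, 30.5, 25.8, 38.9, 27.3, 33.0, 29.6]
mu, beta = gumbel_fit(maxima)
print(round(return_level(mu, beta, 100), 1))  # 100-year design intensity, mm/h
```

Repeating this for several durations yields one point per duration on the IDF curve; the downscaling methods discussed in the abstract address how such local samples relate to GCM-scale predictors.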
How to cite: Nguyen, V.-T.-V.: On Statistical Modeling of Extreme Rainfall Processes for Urban Water Infrastructure Design in the Context of Climate Change, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19163, https://doi.org/10.5194/egusphere-egu2020-19163, 2020.
EGU2020-20583 * | Displays | ITS2.10/NP3.3 | Highlight
Opportunistic sensing in hydrometeorology
Remko Uijlenhoet, Lotte de Vos, Aart Overeem, and Hidde Leijnse
Traditionally, hydrologists have relied on dedicated measurement equipment to do their business (e.g. rainfall-runoff modeling). Such instruments are typically owned and operated by government agencies and regional or local authorities. Installed and maintained according to (inter)national standards, they offer accurate and reliable information about the state of and fluxes in the hydrological systems we study as scientists or manage as operational agencies. Such standard instruments are often further developments of novel measurement techniques which have their origins in the research community and have been tested during dedicated field campaigns.
One drawback of the operational measurement networks available to the hydrological community today is that they often lack the required coverage and spatial and/or temporal resolution for high-resolution real-time monitoring or short-term forecasting of rapidly responding hydrological systems (e.g. urban areas). Another drawback is that dedicated networks are often costly to install and maintain, which makes it a challenge for nations in the developing world to operate them on a continuous basis, for instance.
Yet, our world is nowadays full of sensors, often related to the rapid development in wireless communication networks we are currently witnessing (notably 5G). Let us try to make use of such opportunistic sensors to do our (hydrologic) science and our (water management) operations. They may not be as accurate or reliable as the dedicated measurement equipment we are used to working with, let alone meet official international standards, but they typically come in large numbers and are accessible online. Hence, in combination with smart retrieval algorithms and statistical treatment, opportunistic sensors may provide a valuable complementary source of information regarding the state of our environment.
The presentation will focus on some recent examples of the potential of opportunistic sensing techniques in hydrology and water resources, from rainfall monitoring using microwave links from cellular communication networks (in Europe, South America, Africa and Asia), via crowdsourcing urban air temperatures using smartphone battery temperatures to high-resolution urban rainfall monitoring using personal weather stations.
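The microwave-link retrieval mentioned above rests on a near-linear power law between rain-induced specific attenuation k (dB/km) along the link and the path-averaged rain rate R (mm/h), k = a·R^b. A minimal sketch of the inversion follows; the default coefficients are illustrative only (the true a and b depend on frequency and polarization), and the dry-weather baseline subtraction stands in for the smarter retrieval algorithms the abstract alludes to.

```python
def rain_rate_from_attenuation(loss_db, length_km, a=0.32, b=1.0, baseline_db=0.0):
    """Invert the k = a * R**b power law for a single commercial microwave link.

    loss_db     : measured path loss (dB)
    length_km   : link path length (km)
    baseline_db : dry-weather reference loss to subtract (dB)
    a, b        : power-law coefficients (frequency/polarization dependent;
                  these defaults are illustrative, not operational values)
    """
    # Specific attenuation attributable to rain, clipped at zero (dB/km):
    k = max(loss_db - baseline_db, 0.0) / length_km
    return (k / a) ** (1.0 / b)

# A 3 km link attenuated by 4.8 dB above its dry baseline:
print(round(rain_rate_from_attenuation(4.8, 3.0), 1))  # path-averaged rain rate, mm/h
```

In practice one such estimate per link, combined over thousands of links, is interpolated into rainfall maps; the statistical treatment (wet/dry classification, baseline tracking, outlier filtering) is where most of the retrieval effort lies.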
How to cite: Uijlenhoet, R., de Vos, L., Overeem, A., and Leijnse, H.: Opportunistic sensing in hydrometeorology, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20583, https://doi.org/10.5194/egusphere-egu2020-20583, 2020.
EGU2020-2464 | Displays | ITS2.10/NP3.3
Circular economy in cities: Reviewing how environmental research aligns with local practices
Anna Petit-Boix and Sina Leipold
Circular economy (CE) is gaining popularity at different levels with the promise of creating more sustainable processes. In this context, cities are implementing a number of initiatives that aim to turn them into sustainable circular systems. Whether these initiatives achieve their sustainability goals, however, is largely unknown. Nevertheless, as the application of CE strategies is actively encouraged by many policies across the globe, there is a need to quantify their environmental impacts and to identify the strategies that support urban sustainability. This paper analyses the extent to which research focuses on quantifying the environmental balance of CE initiatives promoted at the municipal level. To this end, the analysis scanned CE initiatives reported in cities around the globe and classified them into urban targets and CE strategies. In parallel, the paper reviewed the literature that uses industrial ecology tools to account for the environmental impacts of CE strategies. Results show an uneven geographical representation: reported cities were concentrated in Europe, whereas most environmental research came from China. In general, cities encourage strategies relating to urban infrastructure (47%), with an additional focus on social consumption aspects, such as repair and reuse actions. In comparison, research mainly addressed industrial and business practices (58%), but its approach to infrastructure was similar to that of cities, both showing a special interest in waste management. Research has yet to assess social consumption and urban planning strategies, the latter being essential for defining the impacts of other urban elements. Hence, there is a need to define the environmental impacts of the strategies that cities select in their quest for circularity. Research and practice can also benefit from working collaboratively so as to prioritize the CE strategies that best fit the features of each urban area.
How to cite: Petit-Boix, A. and Leipold, S.: Circular economy in cities: Reviewing how environmental research aligns with local practices, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2464, https://doi.org/10.5194/egusphere-egu2020-2464, 2020.
EGU2020-150 | Displays | ITS2.10/NP3.3
Comparison of driving habits of drivers living in Hungary and Romania
Fanni Vörös, Mátyás Magyari, and Béla Kovács
People have had a basic need to move since the very beginning; what has changed are the hows and the whys of the route. With the advancement of technology we can travel faster and more comfortably. Of course, not only the vehicles themselves, but also the devices inside them are becoming more modern and faster. One of the most important of these tools is built-in navigation: it should have a fast response time and must provide an appropriate amount of information to the driver.
We assumed that driving habits are influenced by many factors, such as age, sex or residence. Drivers living in Hungary and Romania were examined in our project. Hungary is in Central Europe, in the Carpathian Basin; with about 10 million residents, it is a medium-sized member state of the European Union. Romania is at the junction of Central, Eastern, and South-eastern Europe; it is the 12th largest country and the 7th most populous member state of the EU, with almost 20 million inhabitants. The difference in size between the two countries is one aspect that may be associated with different driving habits. Differences in road quality, GDP or infrastructure can also have an effect.
To test these assumptions we created two Google Forms: one for drivers living in Hungary (in Hungarian) and one for those living in Romania. The latter was available in both Romanian and Hungarian, because the largest minority group in Romania is the Hungarians; for the purposes of the questionnaire, the border between the countries was what mattered. Both questionnaires had the same structure (three parts) and questions: the first part contains 17 general, mandatory questions such as age, education level, and questions about the driver's car (brand, age). Navigation habits are closely linked to driving habits, so we put extra emphasis on them. Depending on whether a respondent uses built-in car navigation or not, we asked different questions: 3 if they do not have one, and 30 if they do. Most of our questions were about these tools, but we also gathered some information about mobile application usage.
There are both similarities and differences in the results. Hungarian drivers have, on average, cars a few years older than Romanian drivers (whose average matches the EU average), but in both countries most respondents drive second-hand cars. Consequently, most people could not choose whether they wanted a built-in GPS or not. Few respondents said that they would not use the device under any circumstances, so it can be said that people basically do not consider GPS unnecessary or rejected. The share of people who own and use built-in car GPS is roughly the same in the two countries.
FV is supported by the ÚNKP-19-3 New National Excellence Program of the Ministry for Innovation and Technology.
How to cite: Vörös, F., Magyari, M., and Kovács, B.: Comparison of driving habits of drivers living in Hungary and Romania, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-150, https://doi.org/10.5194/egusphere-egu2020-150, 2020.
EGU2020-392 | Displays | ITS2.10/NP3.3
Assessing resident's satisfaction regarding housing environment - Case study Radauti municipality
Vasile Efros, Luminita-Mirela Lazarescu, and Vasilica-Danut Horodnic
Housing, associated in the literature with the concept of habitat, is a dynamic process circumscribed to human existence and significantly influenced by the technological progress of recent decades. Access to information, the fast pace of urban life, the multiplication of population concerns and the need for privacy have created new housing preferences and new expectations and needs among residents in relation to the utilities, facilities and services available within urban settlements. In consequence, constant changes in housing needs justify measures to assess the living conditions within a settlement.
The present article proposes an empirical analysis of the population's perception of housing conditions in the city of Radauti, a city of 34,692 inhabitants which, after the fall of the communist regime, has been undergoing a process of urban regeneration, like many other small and medium-sized cities in the ex-socialist states of Eastern Europe. The research used the method of sociological inquiry, applying a questionnaire to a sample of 350 inhabitants with permanent residence in the municipality of Radauti, selected from the lists of citizens with voting rights in the 15 existing constituencies so as to represent the three age categories. The research addressed criteria associated with the characteristics of the housing environment and criteria regarding the accessibility of utilities and services; for each category, variables were selected that can be improved through the involvement of the local administration (characteristics of buildings and housing, existence of and access to utilities, the arrangement of parking lots, communication routes and spaces for pedestrian movement, the urban image, etc.).
The interpretation of the results allowed the respondents' satisfaction and dissatisfaction to be associated with concrete aspects of the settlement, making it possible to identify the factors generating situations that the population assessed negatively. This confirmed the hypothesis that there is an important gap between the needs of the population and the actual state of the facilities, utilities and services to which the population has access, highlighting the unattractive aspects of the living environment and the anticipated responses of users to future conditions. The research also indicated that evaluating the population's satisfaction with the main aspects that define housing in Radauti provides useful feedback for policy makers, indicating the concrete situations in which they could intervene to increase the quality of housing within the settlement.
Key words: housing quality, population needs, assessment, satisfaction, housing environment
How to cite: Efros, V., Lazarescu, L.-M., and Horodnic, V.-D.: Assessing resident's satisfaction regarding housing environment - Case study Radauti municipality, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-392, https://doi.org/10.5194/egusphere-egu2020-392, 2020.
EGU2020-6662 | Displays | ITS2.10/NP3.3
Classification of Rural Areas by Typology : A Case Study of Yunlin & Chiayi, Taiwan
Hsin Yang and Hsueh-Sheng Chang
In the past, Taiwan's spatial planning has focused on the development of urban areas and overlooked rural areas, which has made it difficult to promote rural-urban relationships. This study suggests that rural areas should not be seen as a single undifferentiated category, but as a collection of distinct areas. Since it is becoming important to develop a new spatial planning framework in Taiwan, this study examines territorial spatial structure from a regional perspective, with a focus on the development of the rural areas of Yunlin and Chiayi. Consequently, this study aims to classify rural areas through a typology procedure, in terms of their development dynamics, location, and economic structure, selecting appropriate indicators for each focus of inquiry. The study then uses cluster analysis, accessibility analysis and overlay analysis to classify these rural areas. This approach shows the differences in their spatial characteristics along with their development histories, as well as the relationship between these rural areas and the overall region in which they are situated. It is hoped that this research will provide a more accurate description of the rural areas studied than currently exists, and that this information will be a useful resource for those developing new plans and policies, so that better integration can occur between urban and rural areas in Taiwan.
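The overlay step of such a typology can be pictured as combining several indicator "layers" (development dynamics, location, economic structure) into one discrete type per area. The toy sketch below is purely illustrative: the indicator names, thresholds and class labels are hypothetical, not those of the study.

```python
def classify_rural_area(access_min, pop_change_pct, agri_share_pct):
    """Toy rural typology: overlay three indicator layers into one class.

    access_min      : travel time to the nearest urban core (minutes) - location
    pop_change_pct  : recent population change (%) - development dynamics
    agri_share_pct  : agricultural share of employment (%) - economic structure

    All thresholds and class names are hypothetical, for illustration only.
    """
    near_city = access_min <= 30        # location layer
    growing = pop_change_pct > 0        # dynamics layer
    agricultural = agri_share_pct >= 50 # economic-structure layer
    if near_city and growing:
        return "peri-urban"
    if agricultural:
        return "agricultural hinterland"
    if growing:
        return "emerging rural"
    return "remote declining"

print(classify_rural_area(20, 1.5, 30))   # -> peri-urban
print(classify_rural_area(75, -2.0, 80))  # -> agricultural hinterland
```

In the actual study the class boundaries come from cluster analysis of the indicator data rather than fixed thresholds; the overlay logic, however, has this general shape.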
How to cite: Yang, H. and Chang, H.-S.: Classification of Rural Areas by Typology : A Case Study of Yunlin & Chiayi, Taiwan, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6662, https://doi.org/10.5194/egusphere-egu2020-6662, 2020.
EGU2020-13352 | Displays | ITS2.10/NP3.3 | Highlight
Investigating Pedestrian-level Wind Fields and Thermal Environments Under Different Urban Morphology
Wei-Jhe Chen and Jehn-Yih Juang
As the urban heat island effect intensifies, weather data produced by a single official weather station cannot adequately represent the microclimate of a city. This study selected 17 weather stations in Tainan, Taiwan, to estimate wind velocity at pedestrian level, and utilized 102 automatic stations of the high-density street-level air temperature observation network (HiSAN) to measure air temperature at a height of 2 meters. Based on these observations and on urban environmental information provided by the government, this study established a method for generating high-resolution pedestrian-level weather information for urban areas. The method takes urban morphological parameters, such as surface roughness, into consideration as factors in evaluating wind velocity. Through interpolation and extrapolation, each grid cell obtains microclimate data at the pedestrian-level scale. Both kinds of information were then integrated into a thermal comfort index and presented through a useful tool, WebGIS. The application provides a simple way to visualize the instantaneous environmental situation for urban planning and decision making.
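One standard way surface roughness enters such a wind estimate (an assumption on our part; the abstract does not state the exact formulation) is the neutral-stability logarithmic wind profile, which scales a measured wind speed down from the station's reference height to pedestrian level:

```python
import math

def pedestrian_wind(u_ref, z_ref=10.0, z_target=2.0, z0=1.0):
    """Scale wind speed from a reference height to pedestrian level using
    the neutral-stability log law, u(z) proportional to ln(z / z0).

    u_ref    : wind speed at the reference height (m/s)
    z_ref    : reference measurement height (m), typically 10 m
    z_target : pedestrian level (m)
    z0       : aerodynamic roughness length (m); ~1 m is an illustrative
               order of magnitude for built-up urban terrain
    """
    if z_target <= z0 or z_ref <= z0:
        raise ValueError("heights must exceed the roughness length")
    return u_ref * math.log(z_target / z0) / math.log(z_ref / z0)

# 5 m/s measured at 10 m over rough urban terrain:
print(round(pedestrian_wind(5.0), 2))  # ~1.51 m/s at 2 m
```

Rougher morphology (larger z0) slows the pedestrian-level wind more strongly, which is why morphological parameters matter for both the wind field and the resulting thermal comfort index.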
How to cite: Chen, W.-J. and Juang, J.-Y.: Investigating Pedestrian-level Wind Fields and Thermal Environments Under Different Urban Morphology, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13352, https://doi.org/10.5194/egusphere-egu2020-13352, 2020.
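The interpolation of station observations onto a pedestrian-level grid might look like the following sketch, using inverse-distance weighting (IDW) as an illustrative scheme. The station coordinates, temperatures and the choice of IDW itself are assumptions, since the abstract does not specify its interpolation method.

```python
# Sketch: inverse-distance-weighted interpolation of station observations
# onto grid points (hypothetical stations and temperatures).
import numpy as np

def idw(stations_xy, values, grid_xy, power=2.0, eps=1e-12):
    """Interpolate station values onto grid points by inverse-distance weights."""
    # Distance from every grid point to every station, shape (n_grid, n_stations).
    d = np.linalg.norm(grid_xy[:, None, :] - stations_xy[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # station coords (km)
temps = np.array([30.0, 32.0, 31.0])                       # observed T (degC)
grid = np.array([[0.0, 0.0], [0.5, 0.5]])                  # two grid points
t = idw(stations, temps, grid)
```

A grid point coinciding with a station reproduces that station's value almost exactly; points between stations receive a distance-weighted blend.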
EGU2020-19602 | Displays | ITS2.10/NP3.3
The effects of trees on outdoor thermal comfort in citiesJulien Cravero, Pierre-Antoine Versini, Adélaïde Feraille, Jean-François Caron, and Ioulia Tchiguirinskaia
Nature-based solutions appear to be an interesting option for enhancing the thermal comfort of the urban population during summer, while providing multiple services (e.g. biodiversity enhancement, reduction in building energy consumption, stormwater management, acoustic insulation or air purification). However, the effects of green infrastructure on thermal comfort are not properly characterized, which prevents urban planning policies from being consistent.
The impacts of a single idealized tree on its microclimate are studied. The sensible heat flux emitted by the soil to the air is computed by solving the heat equation in a semi-infinite domain with a Robin boundary condition representing the energy balance of the soil. The sensible heat flux emitted by the vegetation is computed in two ways: with Newton's law of cooling and with an energy-balance approach. This model is applied to a tree-shaped structure supporting climbing plants and compared with the experimental data collected. The prototype was built to assess the cooling performance of this type of vegetation, and in particular the roles played by soil shading, evapotranspiration (i.e. the latent heat flux emitted to the air by the plants and the soil) and absorbed solar radiation. These results may make it possible to estimate the contribution of vegetation to mitigating urban heat island effects on a larger scale.
How to cite: Cravero, J., Versini, P.-A., Feraille, A., Caron, J.-F., and Tchiguirinskaia, I.: The effects of trees on outdoor thermal comfort in cities, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19602, https://doi.org/10.5194/egusphere-egu2020-19602, 2020.
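The soil heat-flux computation described above (heat equation in a semi-infinite domain with a Robin boundary condition) can be illustrated with a minimal explicit finite-difference sketch. All parameter values and the linearized surface energy balance below are illustrative assumptions, not the study's actual model.

```python
# Sketch: explicit finite differences for the 1D heat equation in a soil
# column, with a Robin condition at the surface standing in for a linearized
# surface energy balance. Parameter values are illustrative only.
import numpy as np

def step_soil(T, dx, dt, alpha, k, h, T_air, Q_abs):
    """Advance the soil temperature profile T (surface at index 0) one step."""
    Tn = T.copy()
    # Interior nodes: dT/dt = alpha * d2T/dx2 (explicit scheme).
    Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    # Robin condition at the surface, via a ghost node:
    #   -k * dT/dx = h * (T_air - T_surf) + Q_abs
    ghost = T[1] + 2 * dx / k * (h * (T_air - T[0]) + Q_abs)
    Tn[0] = T[0] + alpha * dt / dx**2 * (T[1] - 2 * T[0] + ghost)
    Tn[-1] = T[-1]  # deep soil held at a constant temperature
    return Tn

dx, dt = 0.01, 1.0               # 1 cm grid, 1 s step (alpha*dt/dx**2 << 0.5)
alpha, k, h = 5e-7, 1.0, 15.0    # diffusivity, conductivity, transfer coeff.
T = np.full(50, 290.0)           # initially uniform profile (K)
for _ in range(3600):            # one hour of shaded conditions (Q_abs = 0)
    T = step_soil(T, dx, dt, alpha, k, h, T_air=300.0, Q_abs=0.0)
```

With warmer air above a shaded soil, the surface node warms toward the air temperature while deeper nodes lag, which is the qualitative behaviour the soil-shading experiment probes.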
EGU2020-21405 | Displays | ITS2.10/NP3.3
Surface runoff deposits and soils contamination in urban areas in Belarus
Marharyta Kazyrenka and Tamara Kukharchyk
This paper presents the results of a study of surface runoff deposits and soils in two Belarusian cities. Urban soils are known to be under significant anthropogenic impact. Investigations of industrial areas are limited by the lack of direct access to them. At the same time, soils on industrial sites can be a significant source of further contamination of adjacent urban areas as a result of water and wind activity. Thus, surface runoff deposits can serve as an indicator of industrial soil pollution. Moreover, the redistribution of pollutants with surface runoff can also cause secondary urban soil contamination. Understanding pollutant migration and accumulation in urban soils, and the possible exposure routes into rivers, is an important part of urban area investigation and planning.
The main objective of the study was to assess the levels of pollutants in runoff deposits and to reveal the role of surface runoff in the migration of pollutants from industrial sites.
Investigations of urban areas were carried out in 2008–2019 in Minsk and in Lida, Grodno region (Belarus). Soil samples were taken from the upper soil layer (mainly 0–5 and 0–10 cm) on the territory of industrial enterprises and in their impact zones. Runoff deposits were sampled mainly in areas covered with asphalt or concrete near industrial enterprises and along roads. Particular attention was paid to areas where the surface slopes away from the enterprises. Atomic absorption spectrometry (AAS) was applied for heavy metal determination; the content of total petroleum hydrocarbons was determined by the fluorimetric method.
Elevated contents of heavy metals and petroleum hydrocarbons in surface runoff deposits were revealed. The concentrations of pollutants in runoff deposits were many times higher than in soils, and the differences between pollutant contents in soil and deposit samples are statistically significant. The maximum permissible concentrations were exceeded in 100% of the analysed surface runoff deposit samples for petroleum hydrocarbons and in 70–100% for metals.
The findings confirm the important role of surface runoff in the migration and accumulation of pollutants, and suggest the need for more in-depth studies of urban areas covering local erosive processes and the formation and role of surface runoff in the migration and redistribution of pollutants beyond their direct sources. Adopting measures to prevent pollutant migration from industrial areas is an important factor in improving the state of soils in urban areas.
How to cite: Kazyrenka, M. and Kukharchyk, T.: Surface runoff deposits and soils contamination in urban areas in Belarus, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21405, https://doi.org/10.5194/egusphere-egu2020-21405, 2020.
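The exceedance statistics reported above (share of samples above a maximum permissible concentration) reduce to a simple computation; the concentration values and the MPC below are hypothetical, for illustration only.

```python
# Sketch: share of samples exceeding a maximum permissible concentration
# (MPC). Concentrations and the MPC value are hypothetical.
def exceedance_share(concentrations, mpc):
    """Fraction of samples whose concentration exceeds the MPC."""
    over = sum(1 for c in concentrations if c > mpc)
    return over / len(concentrations)

# Hypothetical petroleum-hydrocarbon concentrations in runoff deposits, mg/kg.
deposits = [820, 1500, 640, 2100, 990]
share = exceedance_share(deposits, mpc=500)   # hypothetical MPC
```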
EGU2020-1100 | Displays | ITS2.10/NP3.3
Is the hydrological response of Nature-Based Solutions related to the spatial variability of rainfall?
Yangzi Qiu, Ioulia Tchiguirinskaia, and Daniel Scherzter
Nature-Based Solutions (NBS) provide many benefits for the sustainable development of urban environments, one of which is their ability to mitigate urban waterlogging. In many previous studies, the performance of NBS practices has been analysed with semi-distributed models and artificial rainfall, without considering the spatial variability of rainfall. However, NBS practices are decentralized in urban areas, and their hydrological response strongly depends on the small-scale heterogeneity of urban environments. Therefore, this research aims to investigate the impacts of small-scale rainfall variability on the hydrological responses of NBS practices.
In this study, the hydrological response of NBS practices was analysed at the urban catchment scale. A 5.2 km2 semi-urban catchment (Guyancourt, located to the south-west of Paris) is investigated under various future NBS implementation scenarios (porous pavement, green roof, rain garden, and combined). Three typical rainfall events are selected for this purpose. Three sets of distributed rainfall data at a high resolution of 250 m × 250 m × 3.41 min were obtained from the X-band radar of the Ecole des Ponts ParisTech (ENPC), and three corresponding sets of homogeneous rainfall data are used for comparison with the distributed ones. Furthermore, a fully distributed, grid-based hydrological model (Multi-Hydro), developed at ENPC, is used; it takes the spatial variability of the whole catchment into account at a 10 m scale. The hydrological response of the NBS scenarios was analysed through the percentage error on total volume and on peak discharge, with regard to the baseline scenario (current configuration).
Results show that the spatial variability of rainfall affects the hydrological response of the NBS scenarios to varying degrees, and that the effect is most evident for the green roof scenario. Across the three rainfall events, the maximum percentage error on peak discharge for the green roof scenario is 23% under distributed rainfall, versus 17.7% under homogeneous rainfall. Overall, the results suggest that implementing porous pavement and rain gardens is more flexible than implementing green roofs in a semi-urban catchment.
How to cite: Qiu, Y., Tchiguirinskaia, I., and Scherzter, D.: Is the hydrological response of Nature-Based Solutions related to the spatial variability of rainfall?, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1100, https://doi.org/10.5194/egusphere-egu2020-1100, 2020.
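The comparison metric used above, the percentage error on peak discharge relative to the baseline scenario, can be sketched as follows; the hydrograph values are hypothetical and merely chosen to reproduce the reported 23% figure.

```python
# Sketch of the comparison metric: percentage change of peak discharge for an
# NBS scenario relative to the baseline peak. Hydrograph values hypothetical.
def peak_error_pct(baseline_q, scenario_q):
    """Percentage reduction of peak discharge relative to the baseline peak."""
    return 100.0 * (max(baseline_q) - max(scenario_q)) / max(baseline_q)

baseline = [0.2, 1.1, 2.0, 1.4, 0.6]      # discharge series, m3/s (hypothetical)
green_roof = [0.2, 0.9, 1.54, 1.2, 0.5]   # same event under a green-roof scenario
err = peak_error_pct(baseline, green_roof)
```

The same relative-error form applies to total runoff volume by replacing the peak with the sum of the series.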
EGU2020-13115 | Displays | ITS2.10/NP3.3
Risk assessment for tsunami events in the city of Siracusa, Italy
Gianluca Pagnoni, Alberto Armigliato, Stefano Tinti, and Filippo Zaniboni
Siracusa is an important historical city of Greek origin, located on the southern part of the eastern coast of Sicily. The old town developed on the island of Ortigia and expanded onto the nearby mainland, but it later declined and during the Middle Ages occupied only the island. Development of built areas on the mainland restarted at the end of the nineteenth century, with the construction of a number of new quarters. Nowadays the island of Ortigia is connected to the rest of the town by two drive-over bridges. The history of Siracusa, as of the whole eastern coast of Sicily, is marked by destructive earthquakes that caused significant damage and many fatalities, and also by lethal tsunamis (which occurred in 1169, 1693 and 1908). Indeed, this region is one of the coastal areas most prone to tsunami attacks in the Mediterranean Sea, being affected by local-source tsunamis as well as by those generated by earthquakes in the Western Hellenic Arc.
For these reasons, the need has developed over the last decade to prepare adequate evacuation measures in response to hazardous tsunami events. This work, using the method proposed by Pagnoni et al. (2020) and applied to the nearby town of Augusta, studies the tsunami risk for different inundation levels. The results are provided in terms of Human Damage (HD), which is the number of people involved and the number of fatalities, and of Economic Loss (EL), which returns the loss of economic value of the buildings affected by tsunamis. Maps of HD and EL for each inundation scenario make it possible to understand which areas of Siracusa are most affected, and to identify evacuation paths to potentially safe collection areas and/or buildings for efficient emergency plans.
How to cite: Pagnoni, G., Armigliato, A., Tinti, S., and Zaniboni, F.: Risk assessment for tsunami events in the city of Siracusa, Italy, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13115, https://doi.org/10.5194/egusphere-egu2020-13115, 2020.
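The Economic Loss (EL) aggregation described above can be sketched as building value multiplied by a depth-dependent damage fraction, summed over the buildings in an inundation scenario. The step damage curve and building values below are hypothetical, not those of Pagnoni et al. (2020).

```python
# Sketch: aggregating Economic Loss (EL) over buildings as value times a
# depth-dependent damage fraction. Hypothetical step damage curve.
def damage_fraction(depth_m):
    """Hypothetical step damage curve by inundation depth."""
    if depth_m <= 0.0:
        return 0.0
    if depth_m < 0.5:
        return 0.2
    if depth_m < 2.0:
        return 0.6
    return 1.0

def economic_loss(buildings):
    """buildings: list of (value_eur, inundation_depth_m) tuples."""
    return sum(v * damage_fraction(d) for v, d in buildings)

# One hypothetical inundation scenario: (building value, flood depth).
scenario = [(200_000, 0.3), (350_000, 1.2), (500_000, 0.0)]
el = economic_loss(scenario)
```

Evaluating the same building inventory under several depth fields, one per inundation level, yields the per-scenario EL maps mentioned in the abstract.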
EGU2020-20197 | Displays | ITS2.10/NP3.3 | Highlight
New Technologies, Techniques and Tools to Dynamically Manage Urban Resilience: the Fresnel Platform for Greater Paris
Guillaume Drouen, Daniel Schertzer, and Ioulia Tchiguirinskaia
As cities come under greater pressure from the threat of the global impacts of climate change, in particular the risk of heavier rainfall and flooding, there is a growing need to establish a hierarchical form of resilience in which critical infrastructure can become sustainable. The main difficulty is that geophysics and urban dynamics are strongly nonlinear, with an associated extreme variability over a wide range of space-time scales. To better link fundamental and experimental research on these topics, an advanced urban hydro-meteorological observatory with associated SaaS developments, the Fresnel platform (https://hmco.enpc.fr/portfolio-archive/fresnel-platform/), has been purposely set up to provide the concerned communities with the necessary observation data, thanks to an unprecedented deployment of higher-resolution sensors that readily yield Big Data.
To give an example, the installation of the polarimetric X-band radar on the ENPC campus (east of Paris) introduced a paradigm change in the prospects of environmental monitoring in Île-de-France. The radar has been operating since May 2015 and has several characteristics that make it of central importance for the environmental monitoring of the region. In particular, it demonstrated the crucial importance of having high-resolution 3D+1 data, whereas earlier remote sensing developments had mostly focused on vertical measurements.
This presentation discusses the associated Fresnel SaaS (Software as a Service) platform as an example of modern IT tools for dynamically enhancing urban resilience. It is built on an integrated suite of modular components based on an asynchronous, event-driven JavaScript runtime environment. It features a non-blocking interaction model and high scalability to ensure optimized availability. It includes a comprehensive database, accessible in real time, to support multi-criteria choices, and it has been built up through stakeholder consultation and participative co-creation. At the same time, these components are designed so that they can be tuned to specific case studies with the help of an adjustable visual interface. Depending on the case study, the components can be integrated to satisfy particular needs with the help of maps, other visual tools and forecasting systems, possibly from third parties.
All these developments have greatly benefited from the support of the Chair “Hydrology for a Resilient City” (https://hmco.enpc.fr/portfolio-archive/chair-hydrology-for-resilient-cities/), endowed by a world-leading industrial company in water management, and from previous EU framework programmes. To sustain the necessary public-private partnerships, Fresnel facilitates synergies between research and innovation, and fosters theoretical research, national and international collaborative networking, and the development of various aspects of data science for a resilient city.
How to cite: Drouen, G., Schertzer, D., and Tchiguirinskaia, I.: New Technologies, Techniques and Tools to Dynamically Manage Urban Resilience: the Fresnel Platform for Greater Paris, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20197, https://doi.org/10.5194/egusphere-egu2020-20197, 2020.
EGU2020-21547 | Displays | ITS2.10/NP3.3
Air pollution studies in “street canyons” in Minsk and urban planning for minimization of its exposure
Olga Krukowskaya and Hanna Malchykhina
Reducing the risks associated with the effects of polluted air on public health is one of the main tasks of sustainable urban development. This problem can be addressed in two different ways: by reducing emissions and by minimizing human exposure to elevated concentrations of pollutants. In the context of the second approach, it is important to plan the urban area so as to minimize places that combine large numbers of people with poor dispersion conditions.
For this purpose, an investigation and identification of street canyons in the city of Minsk was performed. With about 2 million inhabitants, Minsk is one of the most populous European cities. Because the city was destroyed many times in its history, it now has a mainly planned structure of streets and buildings, laid out according to the general urban development plans of the second half of the 20th century. According to these plans, Minsk has relatively wide main transport lines surrounded by mid-rise buildings, with good conditions for air circulation and the spatial dispersion of air pollutants. Nevertheless, there are locations in the city with conditions close to urban street canyons, characterised by high pedestrian and traffic intensity, and such dense planning is not rare in modern construction. This, together with the limited research on air pollutant concentrations, makes measurements and assessments under such conditions in Minsk important.
NOx concentrations in the air of urban street canyons were sampled in Minsk in 2012–2019. Air was sampled on both sides of the “street canyons”, taking weather conditions into account, and traffic counts were carried out during sampling. The concentration of NOx was determined by the fluorimetric method.
The results obtained show that actual “street canyons” form even with low building heights along streets with heavy traffic. A statistically significant increase of NOx content, by 20–50%, on the windward side compared to the leeward side was shown where building height is comparable to street width. In addition, a statistically reliable correlation is observed between emission levels (assessed from traffic data) and measured concentrations.
The identified patterns of air pollutant concentrations, combined with GIS, allow areas with a potentially increased risk of exposure to be identified. This knowledge will help to plan urban territory in a sustainable way.
How to cite: Krukowskaya, O. and Malchykhina, H.: Air pollution studies in “street canyons” in Minsk and urban planning for minimization of its exposure, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21547, https://doi.org/10.5194/egusphere-egu2020-21547, 2020.
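The reported correlation between traffic-derived emission levels and measured concentrations amounts to a correlation coefficient over paired observations. A sketch with hypothetical traffic and NOx values:

```python
# Sketch: Pearson correlation between traffic counts (an emission proxy) and
# measured NOx concentrations. All values are hypothetical.
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xd, yd = x - x.mean(), y - y.mean()
    return (xd * yd).sum() / np.sqrt((xd**2).sum() * (yd**2).sum())

traffic = [400, 800, 1200, 1600, 2000]   # vehicles/hour, hypothetical
nox = [30, 52, 75, 90, 118]              # ug/m3, hypothetical
r = pearson_r(traffic, nox)
```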
Reducing the risks associated with the effects of polluted air on public health is one of the main tasks of sustainable urban development. This problem can be solved in two different ways: by emission reduction and by minimization of human exposure to elevated concentrations of pollutants. In the context of the second approach, it is important to plan the urban area in order to minimize places with a large number of people and poor dispersion conditions.
For this purpose investigation and identification of street canyons in Minsk city was performed. With population ca 2 mln inhabitants Minsk is one of the most populated European cities. Due to many historical destructions of the city nowadays it has mainly planned structure of streets and buildings according to General plans of urban development designed in the second part of the XX century. According to the plans Minsk has relatively wide main transport lines surrounded by mid-level buildings and has good conditions for air circulation and air pollutions spatial dispersion. Nevertheless, there is some location in the city with conditions close to urban street canyons and is characterised with high pedestrian and traffic intensity. Besides in modern construction so density planning not so rare. That's in addition to limited air pollution concentration researches makes important measurements and assessments in such conditions in Minsk.
For sampling, urban canyons NOx concentration in the air were carried out in 2012-2019 in Minsk. Air was sampled on both sides of “street canyons” taking into account weather conditions. During sampling, traffic accounting was carried out. The concentration of NOx was determined by the fluorimetric method.
Obtained results have shown that the actual formation of “street canyons” occurs even with a low height of buildings along to the streets with heavy traffic. It has been shown that a statistically significant increase of NOx content by 20–50% on the windward side compared to the leeward with buildings height comparable to the width of streets. Besides statistical reliable correlation between emissions levels (assessed based on traffic data) and measured concentrations are observed.
The identified patterns of air pollutant concentrations, combined with GIS, make it possible to identify areas with a potentially increased risk of exposure. This knowledge will help plan urban territory in a sustainable way.
How to cite: Krukowskaya, O. and Malchykhina, H.: Air pollution studies in “street canyons” in Minsk and urban planning for minimization of its exposure, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21547, https://doi.org/10.5194/egusphere-egu2020-21547, 2020.
EGU2020-21404 | Displays | ITS2.10/NP3.3
Assessment of seasonal variability of air pollution by transport in BelarusHanna Malchykhina and Olga Krukowskaya
Air pollution is one of the main challenges of the present. Transport is known to be one of the main emission sources of pollutants such as NOx, CO, and TSP, so reducing emissions from mobile sources could be a key element of the solution. Developing measures to reduce the negative impact of air pollution requires detailed information on emission sources and on the relationship between emissions and air quality. This study assesses the seasonal variability of air pollution by transport and examines the correlation between pollutant emissions and their concentrations.
Emissions of the main pollutants (NOx, CO, SO2, NMVOC, TSP) were assessed using the COPERT emissions model, which is widely used to estimate transport-sector emissions at different levels of aggregation, from city to country. The main input parameters of the model are vehicle fleet information (number of vehicles by fuel type, emission standard, and engine capacity), fuel consumption, meteorological conditions, mileage by vehicle type, and average speeds for each vehicle category. Data on pollutant concentrations in the air were obtained from the National Environmental Monitoring System of Belarus.
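At its core, this kind of inventory combines fleet size, mileage, and per-kilometre emission factors. The following sketch illustrates the basic bookkeeping with hypothetical fleet categories and illustrative emission factors; these are not actual COPERT factors, which additionally depend on speed, temperature, and emission standard:

```python
# Hypothetical fleet categories with illustrative NOx emission factors
# (g/km); real COPERT factors vary with speed, temperature and standard.
fleet = {
    # category: (number of vehicles, mean annual mileage in km, EF in g/km)
    "passenger_petrol": (500_000, 12_000, 0.06),
    "passenger_diesel": (250_000, 15_000, 0.50),
    "heavy_duty":       (40_000,  60_000, 2.10),
}

def annual_emissions_t(fleet):
    """Annual hot-exhaust emissions in tonnes: sum over categories of
    vehicles * mileage * emission factor, converted from grams."""
    return sum(n * km * ef for n, km, ef in fleet.values()) / 1e6

total = annual_emissions_t(fleet)  # tonnes of NOx per year
```

Seasonal terms (e.g., cold-start excess emissions in winter) are then added on top of this hot-exhaust baseline, which is what drives the month-to-month variability discussed below.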
The seasonal emissions variability was shown to differ by pollutant. In particular, maximum carbon monoxide emissions were observed in the cold months and minimum emissions in the warm months; the main driver of this variation is cold-start emissions. For NMVOC the situation is reversed: emissions peak in August and reach their minimum in the winter months. Comparison of the obtained emissions with measured concentrations showed a high correlation for CO and NMVOC.
These findings can help explain how air quality is formed and thereby support the development of air quality management solutions.
How to cite: Malchykhina, H. and Krukowskaya, O.: Assessment of seasonal variability of air pollution by transport in Belarus, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21404, https://doi.org/10.5194/egusphere-egu2020-21404, 2020.
EGU2020-21423 | Displays | ITS2.10/NP3.3
From 3R Approach to 4C Systems: on the Road to Sustainable, Desirable and Resilient CityIoulia Tchiguirinskaia, Pierre-Antoine Versini, and Daniel Schertzer
Wider recognition of climate change strengthens the 3R approach in society – Reduce, Recycle and Reuse – thus broadening the spectrum of Urban Geoscience topics. It also strengthens the consensus that company business models are often too focused on financial value, to the detriment of social and environmental added value. It therefore seems timely to change this way of doing things so that corporate growth is built more firmly on a sustainable development approach, by emphasising the paradigm shift towards ‘shared value’.
'Shared value' means that by meeting the needs and challenges of society, businesses can create economic value in a way that also benefits society, in direct connection with the COP21 commitments and in response to energy, environmental and IT transition laws, thus bringing political ambition and market reality together. To highlight such opportunities, this presentation will capitalise on several research initiatives launched in Greater Paris in recent years related to this topic (https://hmco.enpc.fr/portfolio-archive/):
(i) research to extend non-linear approaches in environment and geophysics;
(ii) results on defining environmental indicators for our cities - considering their multimodal, multiscale and multifunctional structure - to quantify their environmental impacts (e.g., thermal, visual comfort, air quality, heat island mitigation, stormwater management etc.);
(iii) numerous instrumentation and modelling experiments related to the impacts of climate change and to the means of their attenuation;
(iv) results on the monetisation of amenities provided by Blue-Green Solutions in urban areas and their large-scale socio-economic contextualisation;
(v) environmental assessment of many (infra)structures that take into account their design method, implementation, operation, maintenance and end-of-life.
All these research initiatives form the basis for the theoretical emergence of ‘shared value’ within the 4C framework of Cognitive, Collaborative, Coevolutionary and Complex systems, together with a practical methodology towards the sustainable, desirable and resilient city, and they call for broader developments in Urban Geosciences.
How to cite: Tchiguirinskaia, I., Versini, P.-A., and Schertzer, D.: From 3R Approach to 4C Systems: on the Road to Sustainable, Desirable and Resilient City, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21423, https://doi.org/10.5194/egusphere-egu2020-21423, 2020.
EGU2020-13620 | Displays | ITS2.10/NP3.3
Three different approaches to provide urban geological information from a geological survey perspective: the Catalan case studyGuillem Subiela, Miquel Vilà, Roser Pi, and Elena Sánchez
Studying urban geology is a key way to identify municipal issues related to urban development and sustainability, land resources, and hazard awareness in highly populated areas. In the last decade, one of the lines of work of the Catalan Geological Survey (Institut Cartogràfic i Geològic de Catalunya) has been the development of (i) the 1:5.000-scale Urban Geological Map of Catalonia project. In addition, two pilot projects have recently been started: (ii) the system of layers of geological information and (iii) the fundamental geological guides of municipalities. This communication presents these projects and their utility, with the aim of finding effective ways of transferring geological knowledge and information about a territory from a geological survey perspective.
The 1:5.000 urban geological maps of Catalonia (i) have been an ambitious project focused on providing detailed, consistent and accurate information on geology, geotechnics and anthropogenic activity for the main urban areas of Catalonia. Nevertheless, it must be taken into account that compiling and elaborating a large volume of geological information at a high level of detail requires considerable time to achieve data completeness.
To distribute the information more widely, a system of layers of geological information (ii) covering urban areas is being developed. This pilot project provides specific layers for bedrock materials, Quaternary deposits, anthropogenic grounds, structural measurements, geochemical compositions, borehole data, and so on. However, as the layers are treated individually, the coherence between data from different layers may not be evident, and their use is currently limited to Earth-science professionals working with geological data.
Hence, as a strategy to reach a wider range of users and to provide homogeneous and varied geological information, fundamental geological guides for municipalities are also being developed (iii). These documents include a general geological characterization of the municipality, a description of the main geological factors (related to geotechnical properties, hydrogeology, environmental concerns, and geological hazards and resources), and a list of the sources of geological information to be considered. Moreover, each guide contains a 1:50.000 geological map that has cartographic continuity with the neighbouring municipalities. The municipal guides provide a synthesis of the geological environment of the different Catalan municipalities and give fundamental recommendations for its characterization.
In conclusion, the three projects facilitate the characterization of the geological environment of urban areas, the evaluation of geological factors in ground studies and, more generally, the management of the environment. The products differ in their degree of detail, the coherence of the geological information, the knowledge required for their execution, and their intended use. Together they define an urban geological framework that is adjusted to the government's requirements, society's needs, and the geological survey's available resources.
How to cite: Subiela, G., Vilà, M., Pi, R., and Sánchez, E.: Three different approaches to provide urban geological information from a geological survey perspective: the Catalan case study, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13620, https://doi.org/10.5194/egusphere-egu2020-13620, 2020.
EGU2020-10556 | Displays | ITS2.10/NP3.3
Satellite-based monitoring urban environmental change and its implications in the coupled human-nature systemYuyu Zhou, Xuecao Li, Ghassem Asrar, Zhengyuan Zhu, and Lin Meng
Changes in urban environments play important roles in sustainable urban development. Satellite observations at fine spatial and temporal resolutions, together with new computing technologies, make it possible to monitor these changes across large geographic areas and over long time periods. In this study, we developed new algorithms to characterize the dynamics of urban extent, urban heat island, and phenology (i.e., the onsets of the green-up and senescence phases) and successfully implemented them on Google Earth Engine, a state-of-the-art platform for planetary-scale data analysis, mapping, and modelling. The evaluation indicates that the proposed algorithms are robust and perform well in deriving changes in urban environments. Finally, we explored the implications of urban environmental change in the coupled human-nature system by investigating the responses of building energy use and the pollen season to these changes. The resulting products of annual dynamics of urban extent, urban heat island, and phenology indicators offer new datasets for relevant urban studies, such as modeling urban sprawl over large areas and investigating the responses of ecosystems and human activities to urbanization.
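One common way to derive a phenology indicator such as green-up onset from a satellite vegetation-index time series is to threshold the seasonal amplitude. The sketch below is a minimal illustration with hypothetical monthly NDVI values, not the algorithm used in the study:

```python
def greenup_onset(ndvi, frac=0.5):
    """Index of the first observation where NDVI exceeds the winter
    background by a given fraction of the seasonal amplitude (a common
    threshold-based definition of green-up onset)."""
    lo, hi = min(ndvi), max(ndvi)
    threshold = lo + frac * (hi - lo)
    for step, value in enumerate(ndvi):
        if value >= threshold:
            return step
    return None

# Hypothetical monthly NDVI series for one urban pixel (Jan-Dec)
ndvi = [0.20, 0.20, 0.25, 0.35, 0.55, 0.70, 0.75, 0.70, 0.55, 0.40, 0.25, 0.20]
onset_month = greenup_onset(ndvi)  # 0-based month index of green-up onset
```

Applied per pixel and per year, such an indicator can then be compared between urban cores and rural surroundings, e.g. to relate earlier green-up to urban heat island intensity.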
How to cite: Zhou, Y., Li, X., Asrar, G., Zhu, Z., and Meng, L.: Satellite-based monitoring urban environmental change and its implications in the coupled human-nature system, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10556, https://doi.org/10.5194/egusphere-egu2020-10556, 2020.
EGU2020-11186 | Displays | ITS2.10/NP3.3
Discrete cascade disaggregation of climate models for high resolution rainfall estimation in urban environmentClément Brochet, Auguste Gires, Daniel Schertzer, and Ioulia Tchiguirinskaia
Extreme rainfall has severe consequences in urban areas. Knowledge of it is required to properly operate stormwater management systems, avoid urban flooding, and optimize depollution capabilities. Hence improving the understanding of future rainfall extremes in a changing climate is of paramount interest for adapting cities and increasing their resilience.
In this paper, future rainfall extremes are quantified within the universal multifractal (UM) framework, a parsimonious framework that has been widely used to characterize and simulate extremely variable geophysical fields, such as rainfall, across a wide range of scales. It has also been used for statistical downscaling of geophysical fields.
Here, we apply this formalism to analyse output data from the Regional Climate Models CNRM-CM5 and SMHI-RCA4 over the European-Mediterranean domain EUR-11 of the CORDEX project. We first use multifractal analysis techniques to characterize the scaling behaviour of future rainfall, and then estimate the three UM parameters. The notion of maximum observable singularity is then used to quantify extremes across the available scales (12.5 km and 1 hour resolution at finest).
Finally, initial work using discrete cascades to generate realistic rainfall series at higher resolution with a very light parametrization will be presented. Essentially, the underlying cascade process retrieved at the available scales is continued down to the scales required for urban hydrology applications. Both spatial and temporal downscaling are carried out, providing new insights into how to model seasonal effects within the multifractal formalism.
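The principle of a discrete multiplicative cascade can be sketched as follows: at each step, every value is split into b sub-values multiplied by independent random weights of unit mean, so the process can be continued below the resolved scale. This is an illustrative toy version in 1-D, not the calibrated UM cascade of the study:

```python
import random

def discrete_cascade(levels, b=2, seed=0):
    """Toy discrete multiplicative cascade in 1-D: at each level every
    value is split into b sub-values multiplied by independent random
    weights with unit mean, refining the field by a factor b per step."""
    rng = random.Random(seed)
    field = [1.0]
    for _ in range(levels):
        refined = []
        for value in field:
            for _ in range(b):
                # log-normal weight with mean exp(mu + sigma^2 / 2) = 1
                weight = rng.lognormvariate(-0.125, 0.5)
                refined.append(value * weight)
        field = refined
    return field

# 8 cascade steps: 2**8 = 256 high-resolution values from one coarse value
series = discrete_cascade(8)
```

In practice the weight distribution is chosen so that the statistics of the simulated cascade match the UM parameters estimated at the resolved scales, which is what makes the downscaling parsimonious.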
How to cite: Brochet, C., Gires, A., Schertzer, D., and Tchiguirinskaia, I.: Discrete cascade disaggregation of climate models for high resolution rainfall estimation in urban environment, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11186, https://doi.org/10.5194/egusphere-egu2020-11186, 2020.
ITS2.12/HS12.24 – Nature-Based Solutions in Urban Environments
EGU2020-7877 | Displays | ITS2.12/HS12.24
How are the characteristics of Nature-based solutions clustered in European cities?Clair Cooper
An increased awareness of the way in which urbanisation, climate change, and a reduction in the quality of, quantity of, and access to green space and natural infrastructure (such as blue spaces) interact to threaten the health and well-being of urban populations (Nesshover et al., 2017; Kabisch & van den Bosch, 2017) has led to the emergence of a new conceptual framework: Nature-based Solutions. Through the management and use of nature, this concept aims to co-produce ecosystem services that not only allow cities to mitigate and adapt to the effects of climate change and increased urbanisation, but also reduce the public health risks associated with these challenges (WHO, 2016, 2017; Hartig et al., 2014; Kabisch et al., 2017), stimulate economies to reduce inequality in cities (Nesshover et al., 2017), and improve the quality of urban life (Mitchell & Popham, 2008; Mitchell et al., 2015). Using data from the Urban Nature Atlas, a database of 1,000 nature-based solutions from across 100 European cities, this paper examines how the differing characteristics of these solutions (such as their ecological domains, ecosystem services, and forms of governance and innovation) cluster, and how the characteristics of these clusters relate to the different social, economic and health factors that influence quality of life in our cities.
How to cite: Cooper, C.: How are the characteristics of Nature-based solutions clustered in European cities?, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7877, https://doi.org/10.5194/egusphere-egu2020-7877, 2020.
EGU2020-13295 | Displays | ITS2.12/HS12.24
Assessment of green roof incentive policies in European cities by a fractal analysisPierre-Antoine Versini, Auguste Gires, Ioulia Tchiguirinskaia, and Daniel Schertzer
Green roofs represent a market of several tens of millions of m² implemented every year in Europe. They appear particularly efficient at reducing the potential impact of new and existing urban developments by making the city “greener” and more resilient to climate change. Indeed, they provide several ecosystem services, particularly in stormwater management, urban heat island attenuation, and biodiversity conservation. For these reasons, municipalities are implementing specific policies to promote the wide diffusion of green roofs on their territory. Nevertheless, to optimize their performance across urban scales, the spatial distribution of green roofs should be analysed.
In order to study the current green roof implementation and to assess the relevance of the related policies, a multi-scale analysis based on fractal theory has been conducted. Such analysis, widely used in geophysics, is particularly suitable for characterizing spatial fields exhibiting strong heterogeneity over a wide range of scales. The fractal analysis was performed here to characterize the spatial distribution of green roofs in several European cities (London, Amsterdam, Geneva, Lyon, Paris, Berlin, Frankfurt, Copenhagen, Oslo…). These cities were chosen because (i) a GIS database containing the location and geometry of implemented green roofs is available, and (ii) they have implemented various kinds of green-roof policies.
The results show that every studied city exhibits similar behaviour, with three distinct scaling regimes. The second regime (between 16/32 and 512/1024 m) characterizes not only single roofs but their distribution in space, which is what we are interested in. The fractal dimension characterizing this regime is the most variable, ranging from 0.50 to 1.35, and illustrates different degrees of progress in urban greening. It should be noted that the most ambitious incentive measures (where monetary subsidies are offered) correspond to the cities with the highest fractal dimensions. Nevertheless, as these policies are relatively recent, they cannot completely explain the current green roof distribution (architectural history also plays a role).
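The fractal dimension of a spatial distribution such as green-roof locations is typically estimated by box counting: count the occupied boxes N(r) at box size r and take the slope of log N(r) against log(1/r) over the scaling regime of interest. A minimal sketch with a hypothetical point set:

```python
import math

def box_counting_dimension(points, scales):
    """Estimate the fractal dimension of a 2-D point set as the
    least-squares slope of log N(r) versus log(1/r), where N(r) is the
    number of boxes of size r containing at least one point."""
    samples = []
    for r in scales:
        occupied = {(math.floor(x / r), math.floor(y / r)) for x, y in points}
        samples.append((math.log(1.0 / r), math.log(len(occupied))))
    n = len(samples)
    mean_x = sum(s[0] for s in samples) / n
    mean_y = sum(s[1] for s in samples) / n
    num = sum((sx - mean_x) * (sy - mean_y) for sx, sy in samples)
    den = sum((sx - mean_x) ** 2 for sx, sy in samples)
    return num / den

# Sanity check: points filling the unit square should give a dimension of ~2
grid = [(i / 64, j / 64) for i in range(64) for j in range(64)]
dim = box_counting_dimension(grid, scales=[1 / 2, 1 / 4, 1 / 8, 1 / 16])
```

A sparse, clustered roof distribution yields a dimension well below 2, which is why values between 0.50 and 1.35 discriminate between cities at different stages of urban greening.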
The obtained results reveal significant inconsistencies between political ambition and its in situ realization. They illustrate the need to better take into account the spatial distribution of green roof implementations in order to optimize their performance. To provide ecosystem services at large scales, green roofs have to be widely and relevantly implemented. Fractal analysis can be seen as an innovative multi-scale approach for adjusting policies to this purpose.
This work has been made thanks to ANR EVNATURB project (https://hmco.enpc.fr/portfolio-archive/evnaturb/) and the Academic Chair “Hydrology for Resilient Cities”, a partnership between Ecole des Ponts ParisTech and the Veolia group.
How to cite: Versini, P.-A., Gires, A., Tchiguirinskaia, I., and Schertzer, D.: Assessment of green roof incentive policies in European cities by a fractal analysis, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13295, https://doi.org/10.5194/egusphere-egu2020-13295, 2020.
EGU2020-10107 | Displays | ITS2.12/HS12.24
Proposing a typology of nature-based solutions for strengthening resilience of Central Vietnamese cities – First findings from the GreenCityLabHue projectSebastian Scheuer, Jessica Jache, Kora Rösler, Tran Tuan Anh, Nguyen Ngoc Tung, Nguyen Vu Minh, Nguyen Quang Huy, Hoang Thi Binh Minh, Patrick Konopatzki, Fabian Stolpe, Luca Sumfleth, Michael Zschiesche, and Dagmar Haase
Idea and Objectives: This case study presents first findings of the GreenCityLabHue project. The project aims at implementing an urban learning lab in the city of Hue, Vietnam, for the participatory identification and implementation of innovative nature-based solutions for the protection and improvement of urban ecosystem services and climate change adaptation. We will present urgent environmental and societal challenges for the city of Hue, including the estimated impacts of climate change and resulting disaster risks. Subsequently, we will discuss elements of the green-blue infrastructure to tackle these risks in a sustainable and environmentally just manner in the context of a proposed typology of nature-based solutions. This typology specifically shifts the focus from a European perspective towards nature-based solutions that are locally relevant to strengthen the resilience of Hue and comparable cities in Central Vietnam and/or South-East Asia.
Background: Vietnam faces multiple challenges. It is experiencing rapid urban growth, with an estimated 50% of citizens living in urban areas by 2030, up from 35% today. The resulting urban expansion necessitates safeguarding urban ecosystem services, e.g., for the protection of human health and well-being. Vietnam is also heavily affected by climate change. Particularly in Central Vietnam, cities face increasing risks of flooding, storms, and temperature extremes.
By providing multifunctional ecosystem services and diverse benefits, nature-based solutions—and in particular green-blue infrastructure elements—may help to address the aforementioned environmental and societal challenges in a sustainable and integrative manner, e.g., for maintaining air quality, stormwater mitigation, climate regulation, and improving environmental equity.
Hue is the capital of Thua Thien-Hue province, located in Central Vietnam on the banks of the Perfume River. It has a population of approximately half a million people, is a touristic and educational hotspot, and is rated a “top priority city” by the Vietnamese government. In Hue, first steps towards strengthening the green-blue infrastructure were devised in the form of the Hue GrEEEn City Action Plan. However, a more holistic urban planning approach that also addresses challenges related to climate change is still lacking.
How to cite: Scheuer, S., Jache, J., Rösler, K., Tuan Anh, T., Ngoc Tung, N., Vu Minh, N., Quang Huy, N., Thi Binh Minh, H., Konopatzki, P., Stolpe, F., Sumfleth, L., Zschiesche, M., and Haase, D.: Proposing a typology of nature-based solutions for strengthening resilience of Central Vietnamese cities – First findings from the GreenCityLabHue project, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10107, https://doi.org/10.5194/egusphere-egu2020-10107, 2020.
EGU2020-11151 | Displays | ITS2.12/HS12.24
Green or grey? Integration of nature-based solutions for climate change adaptation in densifying citiesSabrina Erlwein and Stephan Pauleit
Urban green and blue spaces such as water bodies, parks and street trees reduce outdoor temperatures and the energy consumption of buildings through evaporative cooling and shading, and are thus promoted as nature-based solutions to enhance climate resilience. However, in growing cities, the supply of urban green space often conflicts with increasing housing demand, resulting in dense neighbourhoods with a lack of green space. Therefore, the transdisciplinary project “Future green city” seeks to identify possibilities for balancing population growth and increasing living space demand with the development of nature-based solutions for climate change adaptation. In a transdisciplinary approach with the City of Munich, living labs are used to investigate how nature-based solutions can be integrated into spatial planning processes.
For the case of an urban redevelopment site with row buildings and extensive greenery, eight densification scenarios were elaborated with city planners to derive planning guidelines for the further development of the area. The scenarios consider the effects of densification with additional floors and new buildings, the use of new building materials and energy efficiency standards, the construction of underground car parks, and consequently a loss of green space to varying degrees. We are particularly interested in the interplay of densification and the availability of green space, and its impact on indoor and outdoor thermal comfort, the energy efficiency of buildings, and their life-cycle-based emission balance. Microclimate modelling is employed to quantify and evaluate the impacts of densification on outdoor thermal conditions during hot days and the benefits of urban green in reducing heat stress.
First modelling results show that additional floors have less impact on human thermal comfort than the loss of green space caused by the provision of required parking space. Though underground car parking avoids surface soil sealing, it leads to the removal of existing urban green and precludes the planting of large trees. Informal instruments such as mobility concepts can reduce the space consumed by car parking. Moreover, urban redevelopment also bears the potential to increase the climate resilience of the building stock through targeted greening strategies. This potential is greater the earlier climate change adaptation is considered in planning processes. Modelling helps to explore the strengths and weaknesses of different alternatives in early design stages.
How to cite: Erlwein, S. and Pauleit, S.: Green or grey? Integration of nature-based solutions for climate change adaptation in densifying cities, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11151, https://doi.org/10.5194/egusphere-egu2020-11151, 2020.
EGU2020-10553 | Displays | ITS2.12/HS12.24
Nature-based solutions for water resource management in the urban fringeAmy Oen and Sarah Hale
A research project called "Nature based solutions for water management in the peri-urban: linking ecological, social and economic dimensions (NATWIP)" started in 2019. Its overall goal is to contribute to closing the water cycle gap by exploring the potential that nature-based solutions (NBS) offer to address water management challenges in landscape areas that have been neglected because they lie in the transition zones between the urban and the rural. Since NBS have most commonly been applied in urban areas, it is worthwhile to broaden the focus and assess the application of NBS on the outskirts of urban areas, or the urban fringe, as such areas are often affected by the expansion processes of the city. Furthermore, these areas have historically played important roles in the development and sustenance of urban centres and in the provision of water-related ecosystem services, particularly water supply, wastewater management and flood control.
Key NATWIP activities include the establishment of a methodological framework to analyse the social, economic and ecological sustainability dimensions of NBS, and its subsequent application at case study sites in Norway, Sweden, Brazil, India, South Africa and Spain. These case study sites present very diverse water management problems as well as NBS. As more emphasis is placed on the use of NBS in the Nordic countries, it is important to identify successful mechanisms for their implementation and monitoring. The case study site in Norway, Skien, represents a highly relevant urban challenge: balancing water quality with increasing water quantity as a result of climate change. This site focuses on the opening of a buried river, using blue-green infrastructure as a catalyst for city development. In Sweden, rainwater harvesting on Gotland has been used to address both drought-induced water shortages and water excess.
The other case study sites present interesting examples where the framework is used to explore management practices that the Nordic countries could learn from. In Spain, the Barcelona Metropolitan backbone is home to green-blue infrastructure and a variety of NBS that aim to improve environmental quality and water cycle management. The Brazilian case study focuses on the most advanced Payment for Environmental Services initiative in Latin America, through which fees collected from water users pay farmers to conserve and restore riparian forests on their lands. In India, rainwater harvesting is used to combat water scarcity and compromised water quality in new peri-urban areas. Two case studies in South Africa show how NBS can address water scarcity in combination with increasingly variable rainfall, more frequent droughts and floods, and growing water demand.
Results from the first assessment of these case study sites will be presented to highlight similarities, differences, challenges, as well as potential synergies for learning from the different case study site contexts.
How to cite: Oen, A. and Hale, S.: Nature-based solutions for water resource management in the urban fringe, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10553, https://doi.org/10.5194/egusphere-egu2020-10553, 2020.
EGU2020-818 | Displays | ITS2.12/HS12.24
Treatment and reuse of domestic greywater through green wallsElisa Costamagna, Fulvio Boano, Alice Caruso, Silvia Fiore, Marco Chiappero, Ana Galvao, Joana Pisoeiro, Anacleto Rizzo, and Fabio Masi
The principles of circular economy and sustainability also apply to water management. Since both water scarcity and water demand are increasing, wastewater reuse is a necessary element in preserving the environment while guaranteeing human development. Greywater is the fraction of wastewater most suitable for reuse: it comes from sinks, showers, bathtubs and laundry. It has low pollutant concentrations, and developed countries generate high volumes of it every day.
Nature-based solutions are well suited to greywater treatment thanks to their environmental and energy advantages. These green systems have low energy consumption (and thus low CO2 emissions), improve air quality (e.g. by capturing CO2), reduce the urban heat island effect, and promote biodiversity. However, their efficiency in treating greywater needs to be investigated in depth in order to reconcile their efficacy with the lack of space in urban areas.
In this study we built a pilot system to treat greywater with green walls, exploiting the unused surfaces of buildings and improving urban areas by increasing their sustainability and resilience, as recommended by Sustainable Development Goal 11 (Make cities and human settlements inclusive, safe, resilient and sustainable) of the UN 2030 Agenda. Our system produces treated greywater that can be reused for non-potable purposes (e.g. gardening and toilet flushing), reducing household potable water consumption.
To meet aesthetic requirements, we selected three evergreen plant species able to tolerate a large amount of water per day. We prepared different porous media mixes to evaluate the effects of additives on the media commonly used in green walls. We built six modular panels, with three replicates per media mix, to assess the statistical variability of the results. Each panel has four independent columns of three pots; each column contains a different porous media mix and is planted with the same sequence of the three plant species. We fed each panel daily with around 100 L of synthetic greywater and monitored different parameters (e.g. BOD, COD, DO, nitrogen, phosphorus, E. coli).
In a first phase we evaluated differences in treatment performance among the media mixes. Removal efficiency varies depending on the parameter considered, but in general our results show statistically significant differences between configurations. In a second phase we considered the treatment performance along each column. Preliminary results of this phase show a significant decrease in pollution already after the second row of pots. In summary, outlet concentrations comply with the most common reuse guidelines for many parameters without any further treatment.
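The per-parameter treatment performance compared across media mixes is conventionally expressed as a removal efficiency, (C_in − C_out)/C_in. A small sketch of that computation; the concentration values are hypothetical illustrations, not measured data from this study:

```python
def removal_efficiency(c_in, c_out):
    """Fractional removal of a pollutant between inflow and outflow:
    (C_in - C_out) / C_in."""
    if c_in <= 0:
        raise ValueError("influent concentration must be positive")
    return (c_in - c_out) / c_in

# Hypothetical influent/effluent concentrations (mg/L) for one media mix.
influent = {"COD": 250.0, "BOD": 120.0, "TN": 15.0}
effluent = {"COD": 40.0, "BOD": 12.0, "TN": 6.0}

for param in influent:
    eff = removal_efficiency(influent[param], effluent[param])
    print(f"{param}: {eff:.0%} removal")
```

Comparing these per-parameter efficiencies between replicates and configurations (e.g. with an ANOVA) is what underlies the statistical significance statements above.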
How to cite: Costamagna, E., Boano, F., Caruso, A., Fiore, S., Chiappero, M., Galvao, A., Pisoeiro, J., Rizzo, A., and Masi, F.: Treatment and reuse of domestic greywater through green walls, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-818, https://doi.org/10.5194/egusphere-egu2020-818, 2020.
EGU2020-18833 | Displays | ITS2.12/HS12.24
Flood risk and water resources management with nature-based solutions on Florence city environmentTommaso Pacetti, Matteo Pampaloni, Giulio Castelli, Enrica Caporali, Elena Bresci, Matteo Isola, and Marco Lompi
Increasing urbanization, evolving socio-economic scenarios and the impacts of climate change require innovative strategies to adapt urban and peri-urban environments, making them more resilient and sustainable. In this context, Nature-Based Solutions (NBS), i.e. actions inspired or supported by nature, can be designed to provide integrated responses to future environmental, social and economic challenges.
The FLORENCE (FLOod risk and water Resources management with Nature based solutions on City Environment) project evaluates the possibility of including NBS as an innovative tool for the management of the territory of the City of Florence (Firenze), Italy. The project develops a quantitative evaluation methodology that clarifies the benefits and co-benefits of NBS, highlighting the limitations and exploring the possible synergies with existing infrastructures.
Starting from the existing literature on NBS siting, a set of parameters to be considered in mapping Ecosystem Services (ES) priority areas (main functions and co-benefits) is derived. This analysis is then coupled with the identification of the constraints (regulatory, urban planning, economic, environmental, social) that determine the boundary conditions for the inclusion of NBS in the urban environment of Florence. Once the most suitable implementation areas for NBS are identified, hydraulic modelling of multiple NBS implementation scenarios is carried out in EPA SWMM. This allows the definition of the scenario that best responds to the city's green development needs and maximizes the production of ES.
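One common way to combine several indicator layers into an ES priority map is a weighted overlay of min-max-normalized rasters. The sketch below illustrates only that generic step; the layer names and weights are hypothetical, and the project's actual GIS workflow may differ:

```python
import numpy as np

def es_priority_map(layers, weights):
    """Weighted overlay of ecosystem-service indicator rasters.
    `layers` maps parameter names to 2-D arrays, `weights` gives their
    relative importance; the output is a 0-1 priority score per cell."""
    assert set(layers) == set(weights), "one weight per layer"
    total = sum(weights.values())
    score = np.zeros_like(next(iter(layers.values())), dtype=float)
    for name, raster in layers.items():
        # Min-max normalize each indicator so layers are comparable.
        r = (raster - raster.min()) / (raster.max() - raster.min())
        score += weights[name] / total * r
    return score
```

Cells with the highest combined score, net of the regulatory and planning constraints, would be the candidate NBS implementation areas fed into the SWMM scenarios.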
How to cite: Pacetti, T., Pampaloni, M., Castelli, G., Caporali, E., Bresci, E., Isola, M., and Lompi, M.: Flood risk and water resources management with nature-based solutions on Florence city environment, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18833, https://doi.org/10.5194/egusphere-egu2020-18833, 2020.
EGU2020-3921 | Displays | ITS2.12/HS12.24
Installation of blue-green solutions at large scale to mitigate pluvial floodsElena Cristiano, Stefano Farris, Roberto Deidda, and Francesco Viola
The growth of urbanization and the intensification of extreme rainfall events that have characterized the last century are leading to an increase in pluvial floods, which are becoming a significant problem in many cities. Among the solutions proposed and developed to mitigate flood risk in urban areas, green roofs and rainwater harvesting systems have been investigated in depth to reduce the runoff generated from rooftops. These tools have largely been studied at small scale, analysing the flood reduction achievable for a single building or a small neighbourhood, without considering large-scale effects. In this work, the potential impact of installing green-blue solutions on all the rooftops of a city is evaluated, assuming green roofs on flat roofs and rainwater harvesting systems on sloped ones. We investigated nine cities in five countries (Canada, Haiti, United Kingdom, Italy and New Zealand), representing different climatological and geomorphological characteristics. The behaviour of the blue-green solutions was estimated with a conceptual lumped ecohydrological model and mass conservation, using rainfall and temperature time series as climatological input to derive the discharge reduction for different scenarios. Due to the high percentage of sloped roofs in most of the investigated locations, the cost-efficiency analysis highlights that large-scale installation of rainwater harvesting tanks achieves higher mitigation capacity than green roofs at lower cost. Green roofs, however, present many additional benefits (such as contributions to biodiversity, thermal insulation for buildings, pollution reduction and added aesthetic value) that need to be weighed by urban planners and policy makers.
The best achievable performance is given by the coupled system of rainwater harvesting tanks and intensive green roofs: for extreme rainfall events this solution guarantees a discharge reduction up to 20% in most of the cities.
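The lumped mass-conservation idea behind the rainwater harvesting component can be sketched as a simple daily bucket model. All parameter values here (roof area, tank size, household demand) are illustrative assumptions, not the authors' calibrated setup:

```python
"""Minimal daily mass-balance sketch of a rooftop rainwater harvesting
tank. Parameter defaults are hypothetical, for illustration only."""

def simulate_tank(rainfall_mm, roof_area_m2=100.0, tank_volume_l=2000.0,
                  daily_demand_l=150.0):
    """Return (overflow per day in litres, fraction of roof runoff retained)."""
    storage = 0.0
    overflow = []
    total_inflow = 0.0
    for p in rainfall_mm:
        inflow = p * roof_area_m2                     # 1 mm over 1 m2 = 1 litre
        total_inflow += inflow
        storage += inflow
        spill = max(0.0, storage - tank_volume_l)     # tank overflows to the drain
        storage = min(storage, tank_volume_l)
        storage = max(0.0, storage - daily_demand_l)  # non-potable use empties the tank
        overflow.append(spill)
    retained = 1.0 - sum(overflow) / total_inflow if total_inflow else 1.0
    return overflow, retained
```

Only the spilled volume reaches the drainage network, so the retained fraction is a first-order proxy for the discharge reduction discussed above.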
How to cite: Cristiano, E., Farris, S., Deidda, R., and Viola, F.: Installation of blue-green solutions at large scale to mitigate pluvial floods, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3921, https://doi.org/10.5194/egusphere-egu2020-3921, 2020.
EGU2020-16428 | Displays | ITS2.12/HS12.24
A GIS Based Model Proposal for Assessing the Transversal Connectivity of Natural Landscapes Along the Urban Rivers in support of NBS
Artan Hysa
This study presents a GIS-based method for estimating the transversal connectivity among natural landscape patches along urban rivers within a metropolitan area. The method relies on the transversally connected natural landscape mosaics (TCNLM) model, which reclassifies landscape patches according to their relative connectivity to water sources. The identified existing and potential TCNLMs can be considered focal areas for providing ecosystem services in the metropolitan zone. The raw material of the analytical process is Urban Atlas (UA) land cover data. All phases of the process are modelled in the Graphical Modeler of the QGIS software. The metropolitan areas of London and Paris are selected as specimens of urban agglomerations along major waterbodies, the Thames and the Seine. The selected cases have considerable similarities and differences, which, together with the results, provide a comparative ground for quantitative and qualitative evaluation. The results show that the method is easily reproducible in other European metropolitan areas developed along watercourses. The presented model offers a rapid way to highlight the transversal connectivity capacities of natural landscapes along rivers within a metropolitan area, in support of Nature Based Solutions for urban challenges.
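The core reclassification step, identifying which natural patches are transversally connected to the river, can be illustrated as a flood fill on a land-cover grid. The grid codes below are hypothetical stand-ins for Urban Atlas classes, not the model's actual nomenclature:

```python
"""Toy sketch of the TCNLM reclassification idea: a natural patch is
flagged 'connected' if any of its cells touches a water cell.
Codes (assumed): 0 = sealed/urban, 1 = natural, 2 = water."""

from collections import deque

def connected_natural_patches(grid):
    """Return (number of natural patches, set of patch ids touching water)."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    connected = set()
    patch_id = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and not seen[r][c]:
                patch_id += 1
                touches_water = False
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:  # flood-fill one contiguous natural patch
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols:
                            if grid[ny][nx] == 2:
                                touches_water = True
                            elif grid[ny][nx] == 1 and not seen[ny][nx]:
                                seen[ny][nx] = True
                                queue.append((ny, nx))
                if touches_water:
                    connected.add(patch_id)
    return patch_id, connected
```

In the QGIS workflow the same logic would be expressed with vector overlay and adjacency tools rather than a raster flood fill; this sketch only conveys the classification criterion.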
How to cite: Hysa, A.: A GIS Based Model Proposal for Assessing the Transversal Connectivity of Natural Landscapes Along the Urban Rivers in support of NBS, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-16428, https://doi.org/10.5194/egusphere-egu2020-16428, 2020.
EGU2020-17661 | Displays | ITS2.12/HS12.24
Urban parks as nature-based solutions for improved well-being under the flight paths: A soundscape analysis in the vicinity of Heathrow Airport
Julia Föllmer, Gemma Moore, and Thomas Kistemann
In the light of inconclusive evidence on the effectiveness of noise protection measures, new strategies are needed to tackle health risks of increasing air traffic. Noise-related health issues are a result of the complex interplay between noise exposure, coping strategies and sound perception, which might be in turn influenced by environmental quality and neighbourhood satisfaction. Thus, the conventional approach of primarily reducing noise levels does not automatically lead to improved well-being and quality of life for affected people. Nature-based solutions, including trees, parks and other tranquil areas, are increasingly being recognised as health-promoting and sustainable forms of noise mitigation in growing cities, as highlighted by the EU Environmental Noise Directive.
Apart from its ability to physically reduce sound pressure levels, the potential of vegetation as a psychological buffer, through reduction of stress and mental fatigue, needs to be further investigated. A multisensory approach in communities around London Heathrow Airport explored how acoustic and visual factors affect cognitive and behavioural responses to aircraft noise. Since the interplay of different senses appears to be an important moderator of sound perception, self-rated measures of psychological stressors and resources were combined with objective evaluations of visual and acoustic environmental quality.
High-quality neighbourhoods were associated with (i) lower general noise annoyance, (ii) fewer noise-disturbed outdoor activities, (iii) higher satisfaction with the residential area, and (iv) better opportunities for recreational coping. High-quality green spaces in particular appeared to reduce stress and refresh concentration capacity by enabling noise-exposed residents to shift from effortful (e.g. focusing on aircraft noise) to effortless (e.g. experiencing tranquillity) attention, thus potentially enhancing well-being. Nature sounds, such as those of birds, wind and water, had limited capacity for reducing perceived outdoor sound levels. Yet their main potential for improving a soundscape lies in their intrinsic ability to promote relaxation and tranquillity, which might in turn reduce perceived noise exposure in the longer term.
Shifting the research interest towards the question of how to achieve desirable soundscapes and neighbourhoods rather than just finding ways to technically eliminate noise, this soundscape study provides an insightful starting point for creating healthier environments in the vicinity of airports. Demonstrating the potential of tranquil urban green spaces as compensation strategies in neighbourhoods affected by aircraft noise might support residents to adopt active and health-enhancing coping strategies, and therefore generate wider spill-over effects on satisfaction, restoration, well-being, and quality of life among communities living under the flight paths. This will help build strategic alliances between health promotion, noise mitigation, and sustainable urban planning.
How to cite: Föllmer, J., Moore, G., and Kistemann, T.: Urban parks as nature-based solutions for improved well-being under the flight paths: A soundscape analysis in the vicinity of Heathrow Airport, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17661, https://doi.org/10.5194/egusphere-egu2020-17661, 2020.
EGU2020-21301 | Displays | ITS2.12/HS12.24
The associations between urban green spaces and self-reported health for university students in Singapore and Turin
Liqing Zhang and Puay Yok Tan
Numerous studies have found that green spaces can promote human health. However, most studies investigate the relationship between green space and health in a single city. Whether this relationship differs among cities with distinct social-cultural and climatic contexts, or whether there are universal patterns, therefore remains unanswered. To investigate this question, this study compares the associations between green space quantity and self-reported health for university students in Singapore and Turin, two high-density cities with different social-cultural and climatic contexts. Students from the National University of Singapore (NUS) and the Politecnico di Torino (POLITO) took part in an online survey measuring their self-reported health, use of green spaces and confounding factors. Using the geographic location of each student's residence collected in the survey, the quantity of green space within a 400 m-radius buffer around the residence was calculated for each respondent. Statistical analysis revealed the associations between green space quantity and self-reported health in both cities. The results enhance our knowledge of how the green space-health relationship depends on social-cultural context.
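The buffer-based exposure metric used here can be sketched on a raster: count the green cells whose centres fall within 400 m of the residence. The cell size and grid below are invented for illustration; the study's actual GIS workflow is not specified in the abstract:

```python
"""Illustrative green-space quantity within a circular buffer around a
residence, on a hypothetical raster (True/1 = green cover). Cell size
and radius defaults are assumptions, not the study's parameters."""

import math

def green_fraction(raster, home_rc, radius_m=400.0, cell_m=100.0):
    """Fraction of cells within radius of home_rc (row, col) that are green."""
    hr, hc = home_rc
    inside = green = 0
    for r, row in enumerate(raster):
        for c, is_green in enumerate(row):
            dist = math.hypot((r - hr) * cell_m, (c - hc) * cell_m)
            if dist <= radius_m:       # cell centre lies inside the buffer
                inside += 1
                green += bool(is_green)
    return green / inside
```

The resulting fraction (or, equivalently, green area) is then the per-respondent predictor entering the statistical models.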
How to cite: Zhang, L. and Tan, P. Y.: The associations between urban green spaces and self-reported health for university students in Singapore and Turin, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21301, https://doi.org/10.5194/egusphere-egu2020-21301, 2020.
EGU2020-7120 | Displays | ITS2.12/HS12.24
Test the Effectiveness of the Open Spaces Scenario in Promoting Socio-economic Development
Si Chen, Zipan Cai, and Brian Deal
The preservation of open spaces has been treated as an important policy in recent years as urbanization levels rise worldwide (Geoghegan, 2002). Multiple positive effects are associated with open spaces, including recreational, aesthetic and environmental values (Geoghegan, 2002). The positive effects of open space as a nature-based solution on urban social, economic and environmental factors have been explored in a number of previous papers, covering housing prices (Lutzenhiser & Netusil, 2001; Bolitzer & Netusil, 2000), spatial pattern (Lewis et al., 2009; Irwin & Bockstael, 2004), human health (Groenewegen et al., 2006; Irvine et al., 2013) and social safety (Groenewegen et al., 2006; Fischer et al., 2004). However, relatively few papers have predicted the influence of open spaces on socio-economic development. This paper first verifies the influence of open space on an economic factor (housing sale prices) and social factors (sense of safety, residential agglomeration) using a linear regression model. We consider housing attributes, urban form attributes (e.g. population density, block size, road density), driving and walking accessibility to different types of public open spaces, and accessibility to other amenities (e.g. hospitals and schools) as influential features. We then test several machine learning algorithms for predicting changes in housing price and sense of safety under future open space planning scenarios, and choose the most suitable algorithm. The City of Chicago, Illinois, US is chosen as the study area because of its data availability, variety of open space types and long-term open space preservation strategies. This study quantifies the value of open spaces in influencing socio-economic development and provides a way to test open space scenarios. It has the potential to serve as a tool for local planners to design better nature-based solutions in open space plans.
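The first analysis step, regressing housing price on accessibility features, can be illustrated with a one-predictor ordinary-least-squares fit. The data and the single predictor (walking distance to the nearest open space) are invented for this sketch; the study uses many more features and real Chicago records:

```python
"""One-predictor OLS sketch of the hedonic-regression step.
All numbers below are synthetic, for illustration only."""

def ols_fit(x, y):
    """Return (intercept, slope) minimising squared error."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx, slope

# Hypothetical sample: sale price (k$) falls as distance to open space (km) grows.
dist  = [0.1, 0.4, 0.8, 1.2, 2.0]
price = [420, 395, 360, 330, 270]
intercept, slope = ols_fit(dist, price)   # slope < 0: open-space access adds value
```

In the full study, the same fitted relationship would then be compared against machine-learning predictors over planning scenarios to pick the best model.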
How to cite: Chen, S., Cai, Z., and Deal, B.: Test the Effectiveness of the Open Spaces Scenario in Promoting Socio-economic Development, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7120, https://doi.org/10.5194/egusphere-egu2020-7120, 2020.
EGU2020-3435 | Displays | ITS2.12/HS12.24
Developing statistically driven eco-engineering designs from LiDAR and structure from motion surveys for marine artificial structures
Peter Lawrence, Ally Evans, Paul Brooks, Tim D'Urban Jackson, Stuart Jenkins, Pippa Moore, Ciaran McNally, Atteyeh Natanzi, Andy Davies, and Tasman Crowe
Coastal ecosystems are threatened by habitat loss and anthropogenic “smoothing” as hard engineering approaches to sea defence, such as sea walls, rock armouring, and offshore reefs, become commonplace. These artificial structures use homogeneous materials (e.g. concrete or quarried rock) and, as a result, lack the surface heterogeneity of natural rocky shorelines known to play a key role in niche creation and higher species diversity. Despite significant investment and research into soft engineering and ecologically sensitive approaches to coastal development, there are still knowledge gaps, particularly in relation to how patterns observed in nature can be utilised to improve artificial shores.
Given the technical improvements and significant reductions in cost within the portable remote sensing field (structure from motion and laser scanning), we are now able to plug gaps in our understanding of how habitat heterogeneity can influence overall site diversity. These improvements represent an excellent opportunity to improve our understanding of the spatial scales and complexity of habitats that species occur within and ultimately improve the ecological design of engineered structures in areas experiencing “smoothing” and habitat loss.
In this talk, I will highlight how advances in remote sensing techniques can be applied to context-specific ecological problems, such as low diversity and loss of rare species within marine infrastructure. I will describe our approach to combining large-scale ecological, 3D geophysical and engineering research to design statistically-derived ecologically-inspired solutions to smooth artificial surfaces. We created experimental concrete enhancement units and deployed them at a number of coastal locations. I will present preliminary ecological results, provide a workflow of unit development and statistical approaches, and finally discuss how these advances may improve future ecological intervention and design options.
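One heterogeneity metric recoverable from a structure-from-motion or LiDAR point cloud is surface roughness, often taken as the dispersion of heights about a fitted surface. The sketch below uses the simplest variant (deviation about the mean height); the authors' actual statistical workflow is not detailed in the abstract:

```python
"""Simplified rugosity proxy for a surveyed surface: standard deviation
of point heights about their mean. A real workflow would detrend
against a fitted plane first; this is a deliberately minimal sketch."""

import math

def roughness(heights):
    """Population std-dev of surface heights (same units as input)."""
    n = len(heights)
    mean = sum(heights) / n
    return math.sqrt(sum((h - mean) ** 2 for h in heights) / n)
```

A smooth sea wall scores near zero on such a metric, while natural rocky shore scores high, which is the contrast the enhancement units aim to close.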
How to cite: Lawrence, P., Evans, A., Brooks, P., D'Urban Jackson, T., Jenkins, S., Moore, P., McNally, C., Natanzi, A., Davies, A., and Crowe, T.: Developing statistically driven eco-engineering designs from LiDAR and structure from motion surveys for marine artificial structures., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3435, https://doi.org/10.5194/egusphere-egu2020-3435, 2020.
EGU2020-7942 | Displays | ITS2.12/HS12.24
Limits on nature-based solutions for coastal adaptation based on climate change indicators
Rosanne Martyr-Koller, Tabea Lissner, and Carl-Friedrich Schleussner
Climate impacts increase with higher warming, and evidence is mounting that impacts increase strongly above 1.5°C. Adaptation needs therefore also rise substantially at higher warming levels. Further, limits to adaptation will be reached above 1.5°C and loss and damage will be incurred. Coastal Nature-based Solutions (NbS) have arisen as popular adaptation options, particularly for coastal developing economies and Small Island Developing States (SIDS), because of their lower overall costs compared to traditional grey infrastructure approaches such as seawalls and levees; their economic co-benefits through positive effects on sectors such as tourism and fisheries; and a broader desire to shift toward so-called blue economies. Two NbS of particular interest for coastal protection are: 1) coral reefs, which reduce coastal erosion and flooding through wave attenuation; and 2) mangroves, which provide protection from storms, tsunamis and coastal erosion. Although there is international enthusiasm to implement these solutions, there is limited understanding of the future viability of these ecosystems, particularly in their capacity as coastal adaptation service providers, in a warmer world.
In this presentation, we highlight how long and with how much coverage coral and mangrove ecosystems can provide coastal protection services under future climate scenarios, using air temperature and sea level rise as climate change indicators. A mathematical model for each ecosystem is developed, based on the physical parameters necessary for the sustainability of these ecosystems. We investigate the protective capabilities of each ecosystem under warming and sea level rise scenarios compatible with: below 1.5°C warming; below 2°C warming; warming based on current global commitments to carbon emissions reductions (3-3.5°C); and no carbon mitigation (6°C). Results show the temperature and sea level rise values beyond which these ecosystems can no longer provide coastal protection services. These results are also framed in a temporal window to show when these services may no longer be feasible, beyond which more costly adaptation measures and/or loss and damage may be incurred.
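The threshold logic described can be reduced to a caricature: an ecosystem keeps providing protection only while warming stays within its tolerance and sea level rise stays below its vertical accretion capacity. The threshold numbers below are placeholders chosen for illustration, not the paper's calibrated values:

```python
"""Caricature of the ecosystem-viability check described in the abstract.
Default thresholds are hypothetical illustrations only."""

def provides_protection(warming_c, slr_mm_per_yr,
                        max_warming_c=1.8, max_accretion_mm_per_yr=7.0):
    """True while both climate indicators stay within the ecosystem's limits."""
    return warming_c <= max_warming_c and slr_mm_per_yr <= max_accretion_mm_per_yr
```

Scanning such a check along a warming/sea-level trajectory yields the temporal window in which the service remains feasible.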
How to cite: Martyr-Koller, R., Lissner, T., and Schleussner, C.-F.: Limits on nature-based solutions for coastal adaptation based on climate change indicators, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7942, https://doi.org/10.5194/egusphere-egu2020-7942, 2020.
EGU2020-326 | Displays | ITS2.12/HS12.24
The use of birch Betula pubescens folia morphology as indicator of atmosphere pollution and anthropogenic pressure
Valentin Sapunov
The increase of anthropogenic pressure and the permanent pollution of natural and urbanized environments require effective methods for monitoring the ecological quality of territories. Priority should be given to simple, inexpensive and informative methods that can be applied by researchers without special environmental training. Such methods include phenogenetic indication and the assessment of the morphological variability of widespread plants. Contaminants and pollutants can be divided into four categories: toxins, teratogens, carcinogens and mutagens. Toxins inhibit the development of organisms but do not affect their genetic program. Teratogens disrupt the implementation of the genetic program. Mutagens and carcinogens damage the genetic program itself, and these disorders can be passed on to the next generation. A convenient object for express monitoring is the birch Betula pubescens (alba) L., which is widespread across Eurasia. Toxic emissions disturb its growth and normal ontogenesis. The variability of the linear parameters of the leaves increases under environmental stress. Teratogens increase the proportion of trees with dichotomy and trichotomy. The fluctuating asymmetry of leaves can serve as a criterion for mutagenic pollution of the environment. This paper presents estimates of morphological variability at different sites in the Leningrad region. The coefficient of fluctuating asymmetry KA = (l1 − l2)² / (l1 + l2) is introduced, where l1 and l2 are paired linear measures of asymmetry. A high correlation was established between the level of diversity and both the distance from motorways and pollution by lead compounds, which are teratogenic. Fluctuating asymmetry is elevated in places of radioactive contamination and depends on the distance to the trace of the Chernobyl disaster and to the nuclear power plant. It is also elevated in places of naturally increased background radiation associated with emissions of radioactive radon and the presence of granites. A map of the distribution of vegetation with varying degrees of morphological diversity and fluctuating asymmetry is presented. The developed methods and algorithms are proposed for the assessment of toxic, teratogenic and mutagenic pollution of the environment and for the ecological monitoring of urbanized and non-urban areas.
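The asymmetry coefficient above can be computed directly from paired leaf measurements. A minimal Python sketch, reading the formula as KA = (l1 − l2)² / (l1 + l2); the example measurements are hypothetical, not data from the study:

```python
def fluctuating_asymmetry(l1, l2):
    """Coefficient of fluctuating asymmetry KA = (l1 - l2)^2 / (l1 + l2),
    where l1 and l2 are paired linear leaf measurements (e.g. the widths
    of the left and right leaf halves)."""
    return (l1 - l2) ** 2 / (l1 + l2)

def mean_ka(leaf_pairs):
    """Average KA over a sample of measured leaves from one site."""
    return sum(fluctuating_asymmetry(a, b) for a, b in leaf_pairs) / len(leaf_pairs)
```

A perfectly symmetric leaf (l1 == l2) yields KA = 0; larger left-right differences raise the site mean, which is what the mapped site comparisons rely on.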
How to cite: Sapunov, V.: The use of birch Betula pubescens folia morphology as indicator of atmosphere pollution and anthropogenic pressure, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-326, https://doi.org/10.5194/egusphere-egu2020-326, 2020.
EGU2020-828 | Displays | ITS2.12/HS12.24
Phytoremediation: Nature based solution for contaminated urban soilsZorana Hrkic Ilic, Marijana Kapovic Solomun, and Nada Sumatic
Abstract: The rapid growth of the urban population and the consequent increase in traffic, construction of buildings, roads and industrial areas affect urban soils and the urban environment in general. Urban soils differ from natural soils in their disturbed structure resulting from waste disposal, construction sites, and pollution from atmospheric deposition, traffic and industrial activities. Mismanagement of the urban environment can cause severe contamination of green areas in cities, with serious health risks for the urban population. To overcome these issues and improve the sustainability of urban green areas, innovative nature-based solutions (NBS) should gain more attention, particularly those easily applied such as tree-based phytoremediation. Unlike traditional remediation techniques, which are expensive, very demanding and can cause secondary pollution, tree-based phytoremediation is an NBS with a wide spectrum of application. It is a low-cost technique, based on urban green infrastructure (parks, alleys, community gardens), with numerous benefits reflected through the sustainable management of urban soils and the improvement of general environmental, health, social and economic conditions for the urban population. Primarily, urban green infrastructure consists of different tree species capable of mitigating soil contamination, especially contamination with toxic heavy metals (HMs). Regeneration of urban ecosystems based on the role of tree species relies on the ability of trees to retain, take up and decompose pollutants (including HMs) from contaminated urban soils, enabling their re-use and turning them into green, environmentally friendly areas. Taking into account the advantages of the phytoremediation technique, the aim of this paper is to present concentrations of selected HMs (cadmium, lead and zinc) in urban soils of cities across Bosnia and Herzegovina and to examine the phytoremediation potential of common urban tree species: horse chestnut (Aesculus hippocastanum L.) and planetree (Platanus × acerifolia (Aiton) Willd.). The results showed a high phytoremediation potential of the above-mentioned tree species, which opens space for further research and for the introduction of this NBS for the remediation of many severely polluted urban soils, drawing attention to a better understanding of urban sustainability and the importance of applying phytoremediation as an NBS at the local level.
Key words: nature-based solutions, phytoremediation, urban soil, trees, heavy metals
How to cite: Hrkic Ilic, Z., Kapovic Solomun, M., and Sumatic, N.: Phytoremediation: Nature based solution for contaminated urban soils, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-828, https://doi.org/10.5194/egusphere-egu2020-828, 2020.
EGU2020-1239 | Displays | ITS2.12/HS12.24
An integrated approach for Territorial Spatial Planning Towards to Sustainable Urban Ecosystem Management: A Case Study of Yantai CityJing Wang and Ying Fang and the Jing Wang
EGU2020-1283 | Displays | ITS2.12/HS12.24
Modeling and evaluation of the effect of afforestation on the runoff generation within the Glinščica catchment (Slovenia)Gregor Johnen, Klaudija Sapač, Simon Rusjan, Vesna Zupanc, Andrej Vidmar, and Nejc Bezak
Modeling and evaluation of the effect of afforestation on the runoff generation within the Glinščica catchment (Slovenia)
Gregor Johnen1, Klaudija Sapač2, Simon Rusjan2, Vesna Zupanc3, Andrej Vidmar2, Nejc Bezak2
1 Radboud University Nijmegen, Faculty of Science
2 University of Ljubljana, Faculty of Civil and Geodetic Engineering
3 University of Ljubljana, Biotechnical Faculty
Abstract:
Increases in the frequency of flood events are one of the major climate-change-induced risk factors leading to a higher vulnerability of affected communities. Natural water retention measures such as afforestation on hillslopes and floodplains are increasingly discussed as cost-effective alternatives to hard engineering structures for providing flood regulation, particularly when the evaluation also considers beneficial ecosystem services other than flood regulation. The present study provides a combined modelling approach and a cost-benefit analysis (CBA) of the impacts of afforestation on peak river flows and on selected ecosystem services within the Glinščica river catchment in Slovenia. To investigate the effects, the hydrological model HEC-HMS, the hydraulic model HEC-RAS and the flood damage model KRPAN, which was developed specifically for Slovenia, are used. It was found that increasing the amount of tree cover results in a flood peak reduction of 9–13 %. Flood extents were significantly lower for most scenarios, leading to reduced economic losses. However, a 100-year CBA showed positive net present values (NPV) for only one of the considered scenarios, and the benefits were dominated by flood regulation benefits, which were higher than, for example, biodiversity or recreational benefits. Based on our findings we conclude that afforestation as a sole natural water retention measure (NWRM) provides a positive NPV only in some cases (i.e. scenarios) and only if additional ecosystem co-benefits are considered.
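The net-present-value comparison behind such a CBA can be sketched as follows; the 3 % discount rate, the 100-year horizon and the annual benefit/cost figures are illustrative assumptions, not values from the study:

```python
def npv(annual_benefits, annual_costs, rate=0.03):
    """Discounted sum of (benefit - cost) over the project horizon.
    The cash flow of year t is discounted by (1 + rate) ** t."""
    return sum(
        (b - c) / (1 + rate) ** t
        for t, (b, c) in enumerate(zip(annual_benefits, annual_costs), start=1)
    )

# Hypothetical afforestation scenario over 100 years: a large planting
# cost in year 1, then constant annual flood-regulation benefits.
years = 100
benefits = [120.0] * years                # e.g. avoided flood damage per year
costs = [5000.0] + [10.0] * (years - 1)   # planting, then maintenance
scenario_npv = npv(benefits, costs)
```

A scenario is worth pursuing on these terms only when the discounted benefits (flood regulation plus any ecosystem co-benefits included) outweigh the discounted costs, i.e. when the NPV is positive.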
How to cite: Johnen, G., Sapač, K., Rusjan, S., Zupanc, V., Vidmar, A., and Bezak, N.: Modeling and evaluation of the effect of afforestation on the runoff generation within the Glinščica catchment (Slovenia), EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1283, https://doi.org/10.5194/egusphere-egu2020-1283, 2020.
EGU2020-1291 | Displays | ITS2.12/HS12.24
The impact of two decades of land-use changes in potential ecosystem services supply in a Portuguese municipality - CoimbraInês Amorim Leitão, Carla Sofia Santos Ferreira, and António José Dinis Ferreira
Land-use changes affect the properties of ecosystems and are typically associated with a decreasing ability to supply services, which in turn reduces social well-being. Urbanization is identified as one of the main causes of ecosystem degradation, since it replaces natural areas with artificial surfaces. This study investigates the impact of land-use changes over 20 years (1995-2015) on the potential supply of ecosystem services in the Coimbra municipality, central Portugal. The assessment was based on questionnaires completed by 31 experts familiar with the study area. The experts ranked the potential supply of 31 ecosystem services, grouped into regulation, provisioning and cultural services, for the existing land uses. The experts performed a qualitative evaluation, using the classes ‘strong adverse potential’, ‘weak adverse potential’, ‘not relevant’, ‘low positive potential’ and ‘strong positive potential’. The qualitative evaluation was converted into a quantitative classification (-2, -1, 0, 1, 2). The quantitative values were then used to develop an ecosystem services quantification matrix and to map the information over the study area using Geographic Information Systems (GIS). An urban expansion from 14% to 18% was recorded over the last 20 years. Agricultural land decreased by 8% due to conversion into forest (4% increase) and urban areas (4% increase). This has led to a decrease in the supply of provisioning (e.g. food) and regulation services (e.g. flood regulation). In fact, over recent years, recurrent floods have been increasingly noticed in the city of Coimbra. On the other hand, the growth of forest areas has led to an increase in overall ES supply. The adverse impacts of urbanization were partially compensated by the enlarged benefits provided by forest areas, the land use with the greatest potential ES supply. To support urban planning and develop sustainable cities, it is essential to quantify the potential supply of ecosystem services considering the local scale and characteristics.
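The qualitative-to-quantitative conversion described above is a simple lookup followed by averaging. A minimal sketch; the example rankings are hypothetical:

```python
# Mapping of the five qualitative classes to the -2..2 scale used
# in the ecosystem services quantification matrix.
SCORE = {
    "strong adverse potential": -2,
    "weak adverse potential": -1,
    "not relevant": 0,
    "low positive potential": 1,
    "strong positive potential": 2,
}

def mean_score(expert_rankings):
    """Convert one land use's expert rankings for a given service to
    numeric scores and average them for the quantification matrix."""
    values = [SCORE[r] for r in expert_rankings]
    return sum(values) / len(values)
```

Each cell of the matrix (one land use, one service) would then hold the mean score across the 31 experts, ready for mapping in GIS.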
How to cite: Amorim Leitão, I., Santos Ferreira, C. S., and Dinis Ferreira, A. J.: The impact of two decades of land-use changes in potential ecosystem services supply in a Portuguese municipality - Coimbra, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1291, https://doi.org/10.5194/egusphere-egu2020-1291, 2020.
EGU2020-2115 | Displays | ITS2.12/HS12.24
Adaptive Structure Design of Raingarden in Shanghai Coastal Saline Alkali Area for Improving Urban ResilienceBingqin Yu, Shengquan Che, and Lu Wang
Shanghai, a typical coastal saline-alkali area, is one of the demonstration sites of the Sponge City programme. To improve urban resilience and mitigate storm water, green infrastructure such as rain gardens, bioswales and green roofs is used to regulate runoff. However, current rain garden designs lack solutions for the high groundwater levels and soil salinization found in Shanghai. To improve regional adaptability and optimize rain garden design, indoor rainfall simulation experiments and orthogonal experiments were used to analyze the effects of different structural factors (salt-insulating layer material, salt-insulating layer position, filler layer thickness) on salt isolation and rain infiltration. The results show that the order of influence on salt isolation is: salt-insulating material > filler layer thickness > salt-insulating layer position. The order of influence on rain infiltration is: salt-insulating material > salt-insulating layer position > filler layer thickness. Three types of rain garden structure are proposed: a strongly salt-insulated rain garden suitable for severe saline-alkali areas, a comprehensive rain garden suitable for moderate saline-alkali areas, and a permeable rain garden suitable for light saline-alkali areas.
How to cite: Yu, B., Che, S., and Wang, L.: Adaptive Structure Design of Raingarden in Shanghai Coastal Saline Alkali Area for Improving Urban Resilience, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2115, https://doi.org/10.5194/egusphere-egu2020-2115, 2020.
EGU2020-2423 | Displays | ITS2.12/HS12.24
Township Spatial Commodification and Environmental Spatial Disaster under the Threat of Climate Change: Flooding and Development of (Taiwan) Zhubei UrbanizationChi-Tung Hung, Wen-Yen Lin, and Shih-Han Lin
Agricultural land has been treated as urban reserve land with potential value for transformation. Under the land development strategy of the “developmental government” in particular, the value of agricultural land has been distorted, encouraging trading activity and resulting in lower production and shortfalls in food supply. Threats from extreme weather and environmental change have increased the potential hazards of land use and the challenges facing town planning. This study uses environmental diagnosis and field surveys with in-depth interviews, along with results from the FLO-2D flood model and GIS overlays of hazard risk maps, to examine a case of “property-led development” in Zhubei city (Xinzhu, Taiwan). The findings indicate that the runoff volume of some areas has changed and that flooding depth and extent have increased. The empirical results expose the environmental hazard issues and land commodification caused by the failure of urban planning policy in Zhubei city.
How to cite: Hung, C.-T., Lin, W.-Y., and Lin, S.-H.: Township Spatial Commodification and Environmental Spatial Disaster under the Threat of Climate Change: Flooding and Development of (Taiwan) Zhubei Urbanization, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2423, https://doi.org/10.5194/egusphere-egu2020-2423, 2020.
EGU2020-3551 | Displays | ITS2.12/HS12.24
Exploring the relationships among ecosystem services and their drivers in the Beijing-Tianjin-Hebei urban agglomerationJiashu Shen
Understanding the relationships among multiple ecosystem services and their drivers is crucial for the sustainability of ecosystem services provision. Different ecosystem services were quantified using different models, and the relationships among ecosystem services and their drivers were analyzed using different statistical methods in the Beijing-Tianjin-Hebei urban agglomeration. Our results showed that the spatially concordant supply of regulating services and cultural services decreased from northwest to southeast, whereas the delivery of provisioning services decreased from southeast to northwest in the region. The provisioning service was antagonistic with both the regulating services and the cultural service, and the relationships among the regulating services and the cultural service were mostly synergistic. Different combinations of ecosystems provided seven types of ecosystem services bundles with different compositions and quantities of ecosystem services. Different drivers had different impacts on different ecosystem services. On the basis of our findings, we suggested that the features of ecosystem service relationships and their drivers should be considered to ensure the efficiency of the management of natural capital.
How to cite: Shen, J.: Exploring the relationships among ecosystem services and their drivers in the Beijing-Tianjin-Hebei urban agglomeration, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3551, https://doi.org/10.5194/egusphere-egu2020-3551, 2020.
EGU2020-3823 | Displays | ITS2.12/HS12.24
Critical Region Identification of Land Use/Land Cover based on typical ecosystem services - a case study of Beijing-Tianjin-Hebei Urban AgglomerationYinglu Liu
Contradictions between population, economic development, land and the ecological environment occur frequently in the Beijing-Tianjin-Hebei urban agglomeration, forming a complex regional "population - land - social economy - ecological environment" problem. This study considers seven indicators, including LUCC and three typical ecosystem services, to recognize critical regions. Through continuous experiments and adjustments of parameters, we determined an equal-weight overlay method and quantitatively evaluated land use dynamic degrees, land use extents, the diversity of land use types, the ecological land use ratio, the carbon sequestration service, and the soil conservation and water production services, to integrally identify critical areas of the Beijing-Tianjin-Hebei urban agglomeration. We aim to realize the coordinated sustainable development of the Beijing-Tianjin-Hebei region as soon as possible and to provide a basis for land planning. The results show that the critical regions of the Beijing-Tianjin-Hebei urban agglomeration are mainly distributed in the Yanshan and Taihang mountain regions and the surrounding towns. At the county level, the first-level critical regions are mainly located in Beijing, Qinhuangdao and Chengde, and the second-level critical regions are mainly located in Chengde, Beijing, Qinhuangdao and Baoding.
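An equal-weight overlay of several indicator layers, as described above, can be sketched as follows; the min-max normalization step and the toy layer values are illustrative assumptions, not the study's actual procedure or data:

```python
def min_max(layer):
    """Rescale one indicator layer (a list of cell values) to [0, 1]."""
    lo, hi = min(layer), max(layer)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in layer]

def equal_weight_overlay(layers):
    """Average the normalized layers cell by cell with equal weights;
    high composite scores flag candidate critical regions."""
    normalized = [min_max(layer) for layer in layers]
    return [sum(cell) / len(layers) for cell in zip(*normalized)]
```

With seven indicator layers covering the same grid, the composite score per cell is the mean of its seven normalized indicator values; thresholding that score would separate first- and second-level critical regions.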
How to cite: Liu, Y.: Critical Region Identification of Land Use/Land Cover based on typical ecosystem services - a case study of Beijing-Tianjin-Hebei Urban Agglomeration, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3823, https://doi.org/10.5194/egusphere-egu2020-3823, 2020.
EGU2020-6588 | Displays | ITS2.12/HS12.24
Impact of local industry expansion on farmland ecosystem services: A case study of farmland-factories in Changhua County, Taiwan
Jhu Ting Chen and Hsueh-Sheng Chang
The preservation of agricultural landscapes plays a crucial role in the productivity, sustainability and other ecosystem services of agricultural systems. In Taiwan, there is a fraught history between the sprawl of factories and farmland preservation. With the expansion of city boundaries, a wave of urban sprawl formed at the fringes of urbanized areas in the 1960s. It created a prevailing phenomenon in rural Taiwan called "non-agricultural use on farmland", meaning land use that violates current law, such as factories and houses. These activities severely deteriorate the landscape and the quality of agricultural products in Taiwan. However, with growing awareness of the importance of ecosystem services, people in Taiwan are no longer satisfied with the current policy-making process and are asking that ecosystem values be taken into account in policy formulation and spatial planning. Yet current spatial planning policies rarely include the value of ecosystem services in the assessment process, which results in biased policy evaluation and detracts from ecosystem services and human well-being.
Therefore, in order to incorporate ecosystem service assessment into spatial planning policy, this study first evaluates ecosystem services with the InVEST model and identifies their spatial pattern through spatial autocorrelation. Through a literature review, we select the four modules considered most relevant to the preservation of the farmland landscape: "Carbon Storage and Sequestration", "Habitat Quality", "Annual Water Yield" and "Sediment Delivery Ratio". Then, by translating the spatial pattern of ecosystem services into criteria settings, the study simulates the change in overall ecosystem services and their hotspots under different scenarios of farmland control policies. To further assess the spatial relevance of farmland-factory management policies to ecosystem services, this study uses spatial autocorrelation to locate ecosystem services and to identify a reasonable and effective farmland management strategy.
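The spatial autocorrelation step can be illustrated with a minimal global Moran's I calculation; the zone values and contiguity weights below are hypothetical (a chain of four zones), not taken from the study, and a positive I indicates clustering of similar ecosystem-service values, i.e. hotspots:

```python
import numpy as np

def morans_i(values, weights):
    """Global Moran's I for 1D zone values under a binary
    spatial weights matrix (n x n, zero diagonal)."""
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    n = x.size
    z = x - x.mean()                       # deviations from the mean
    num = (w * np.outer(z, z)).sum()       # sum of w_ij * z_i * z_j
    den = (z ** 2).sum()
    return (n / w.sum()) * num / den

# Hypothetical ecosystem-service values for 4 zones in a chain (0-1-2-3)
vals = [1.0, 1.0, 5.0, 5.0]
w = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
I = morans_i(vals, w)   # positive: low values cluster next to low, high next to high
```

Local variants (e.g. local Moran's I per zone) would be used to map the hotspot locations themselves.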
Preliminary analysis indicates that the demolition of farmland-factories would have positive effects on multiple ecosystem services. We examine several scenarios: one that considers the spatial pattern and takes hotspots of the selected ecosystem service modules into account, one that considers growth management, and one that considers ongoing governmental policies. However, the resulting ecosystem services and their spatial patterns differ due to the spatial structure of the study area and its physical conditions. Some ecosystem services are clearly affected by the spatial structure or the physical environment, while others show no significant difference. This paper aims to inform the farmland-factory management policy process. The findings can be applied to policies concerned with landscape preservation planning and management, providing a GIS-based, scenario-based method of ecosystem service assessment with which we hope to construct a harmonious policy framework for landscape preservation and industrial expansion.
How to cite: Chen, J. T. and Chang, H.-S.: Impact of local industry expansion on farmland ecosystem services: A case study of farmland-factories in Changhua County, Taiwan, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6588, https://doi.org/10.5194/egusphere-egu2020-6588, 2020.
EGU2020-12739 | Displays | ITS2.12/HS12.24
Methodological approaches to assess the impact of landscaping on the microclimate of urban environment of southern mountain resorts
Elena Chalaya, Natalia Efimenko, Nina Povolotskaya, Irina Senik, and Victor Slepykh
The changing urban systems of resorts show a tendency towards a decrease in the tree resources of urban greenery that perform an environmental protection function, and towards the appearance of pathogenic effects whose nature has not been sufficiently studied.
We consider results of route-based landscape-climate monitoring in the southern region of the Caucasian Mineral Waters (Russian Federation), using methods adopted in balneology [1]. The subject of the study was the modules of the resort and recreational potential (RRP) at experimental urbanized (open) and natural park (shaded) sites at elevations of 600, 800 and 1000 m above sea level.
The results indicate significant territorial differences in integral RRP values between urbanized and natural areas. Differences between the extreme values ranged from 0 to 2.55 points (out of a possible 3), i.e. from extremely unfavorable to comfortable conditions. The analysis showed that, in terms of weighting, the "pathogenicity" of the urban microclimate was formed by differences in landscaping conditions at the experimental sites: in solar illumination (up to 100 lx), total solar radiation (up to 600 W/m²), temperature of the geological substrate (up to 23-25 °C), relative air humidity at the surface (20%), natural air anions (up to 420 ions/cm³), the percentage of the minimum permissible anion level (> 400 ions/cm³, up to 60%), ground-level aerosol pollution with particle diameters below 1000 nm that penetrate to the alveoli when breathing (by 10-50%), and conditions of hypoxia (up to 10-20 g/m³, 5-10%).
Conclusion: the results indicate the dominant role of greening in correcting the microclimate modules and resort-recreational potential of urban mountain resorts. When developing urban planning standards for mountain resorts, a special type of urban landscaping should be provided, aimed at reducing the area of exposed stone building surfaces through vertical landscaping, increasing the "green shade" over urban pedestrians through "tent gardening", and bringing urban tree planting to 40-60% of the resort area.
References: 1. Resort study of the Caucasian Mineralnye Vody region, edited by Prof. V.V. Uyba. Scientific publication. Pyatigorsk, 2011, 368 p.
How to cite: Chalaya, E., Efimenko, N., Povolotskaya, N., Senik, I., and Slepykh, V.: Methodological approaches to assess the impact of landscaping on the microclimate of urban environment of southern mountain resorts, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12739, https://doi.org/10.5194/egusphere-egu2020-12739, 2020.
EGU2020-8536 | Displays | ITS2.12/HS12.24
Modelling London’s Urban Green physiological responses and impacts on flood retention sustainability under climate change
Ziyan Zhang, Athanasios Paschalis, and Ana Mijic
Surface water flooding is the most likely cause of flooding in London, still affecting at least 3% of the area and up to 680,000 properties. Urbanization and climate change are expected to increase the impacts of urban flooding in the near future. To mitigate such problems and provide resilient ecosystem services for Europe’s largest capital, Urban Green Infrastructure adaptations have been used extensively over the last two decades in conjunction with traditional grey infrastructure. The sustainability and efficiency of green infrastructure depend on the ability of plants to emulate natural water and carbon cycles in the city. Considering the expected rise in temperature, changes in rainfall patterns and intensification of the urban heat island effect, existing and planned green infrastructure solutions might be vulnerable to plant water stress. Since there will be much less space available to accommodate future changes in cities, it is extremely important to consider the system’s potential performance well beyond construction. In this study we perform a detailed evaluation of the capacity of representative London parks and rain gardens to mitigate flood risk under a changing climate. Specifically, we focus on the hydrological performance of urban rain gardens (consisting exclusively of low-stature plants) and urban parks (a composite of low-stature vegetation and urban forests) in London. The coupled water and carbon dynamics were evaluated using the ecohydrological model Tethys-Chloris (T&C) forced with the latest-generation UKCP18 climate change projections. Based on our simulations we disentangle the composite effects of climate change into plant physiological responses to elevated CO2 and changes in precipitation patterns and temperature.
Our results indicate that:
(a) Changes in weather severely affect plant efficiency during the 2nd half of the 21st century;
(b) Effectiveness of green infrastructure is strongly dependent on possible climate change outcomes;
(c) Within a certain range of plausible climate changes, for the 1st half of the 21st century positive effects of changes in climate can mostly counteract negative plant physiological responses to elevated CO2, but those negative effects gradually become dominant;
(d) Efficient and sustainable design of urban green infrastructure to mitigate flooding must consider an optimal adaptive choice of plants to offset the projected negative impacts of elevated CO2 and uncertain climate.
How to cite: Zhang, Z., Paschalis, A., and Mijic, A.: Modelling London’s Urban Green physiological responses and impacts on flood retention sustainability under climate change, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8536, https://doi.org/10.5194/egusphere-egu2020-8536, 2020.
EGU2020-22563 | Displays | ITS2.12/HS12.24
Reactive transport modeling of an innovative nature-based solution for domestic sewage treatment
Johannes Boog, Thomas Kalbacher, Jaime Nivala, Manfred van Afferden, and Roland A. Müller
The discharge of inadequately treated sewage is still a worldwide problem that contributes to the deterioration of receiving water bodies. Especially in urban environments of less developed countries this threatens drinking water availability and, therefore, puts human health at risk and impedes sustainable urban development. Aerated treatment wetlands are innovative nature-based solutions that have been successfully applied in treating domestic, municipal and industrial effluents. The advantage of these technologies is their simplicity which translates into low operation and maintenance requirements and robust treatment. Aerated wetlands can be easily integrated into decentralized water infrastructure to serve the demand of changing and fast-growing urban environments.
Aerated wetlands mimic natural processes to treat wastewater. Air is injected into these systems to provide an aerobic environment for increased aerobic biodegradation of pollutants. However, quantitative knowledge on how aeration governs oxygen transfer, organic matter and nitrogen removal within aerated wetlands is still insufficient.
In this study, we developed a reactive transport model for horizontal sub-surface flow aerated wetlands using the open-source multi-physics simulator OpenGeoSys. The model was calibrated and validated against pilot-scale experiments with real domestic sewage, covering both steady-state operation and induced aeration failures. In both cases, the model achieved an acceptable degree of simulation accuracy. Furthermore, the experiments including short-term aeration failure showed that horizontal flow aerated wetlands can fully recover from such operational disruptions.
We then analyzed several simulation scenarios and found that increasing aeration alters water quality gradients for organic carbon and nitrogen and shifts them downstream. This can, for instance, be exploited to provide specific effluent qualities for different demands in an urban environment, such as irrigation or groundwater recharge. We identified that the aeration rate required for efficient and robust removal of organic carbon and nitrogen from domestic wastewater is 150-200 L m⁻² h⁻¹. The developed model can be used by researchers and engineers to support the design of horizontal flow aerated wetlands for applications in urban environments. Furthermore, our research highlights the suitability of horizontal flow aerated wetlands as a resilient treatment technology with potential application for water pollution control in urban environments.
How to cite: Boog, J., Kalbacher, T., Nivala, J., van Afferden, M., and Müller, R. A.: Reactive transport modeling of an innovative nature-based solution for domestic sewage treatment, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22563, https://doi.org/10.5194/egusphere-egu2020-22563, 2020.
EGU2020-21377 | Displays | ITS2.12/HS12.24
Assessing the performance of Sustainable Drainage Systems (SuDS) in urban context using SWMM5 modelling scenarios: the example of a typical industrial area in Lombardia Region, northern Italy
Roberta D'Ambrosio, Britta Schmalz, and Antonia Longobardi
Recently, particularly invasive urbanization dynamics have resulted in a substantial increase in urban impervious surface, forcing administrations to deal more frequently with the inability of traditional drainage systems to manage stormwater sustainably and effectively. Worldwide, integrated approaches such as Sustainable Drainage Systems (SuDS), whose basic principle is the management of rainwater at the source through prevention, mitigation and treatment strategies, are increasingly being developed.
The project aims to assess the benefits, in terms of flood reduction, of the widespread implementation of SuDS in an industrial area of about 300 ha in northern Italy, and to analyse their behaviour under local climatic conditions. For this purpose, in the absence of rain gauges in the study area, analyses were carried out to obtain reliable and continuous rainfall data from the weather stations closest to the basin. Ten years of rainfall data (2009-2018), recorded at 15-minute time steps by 10 stations, were acquired from the Regional Agency for Environmental Protection of the Lombardia Region, and Inverse Distance Weighting was used as the interpolation method to obtain precipitation for the area of interest.
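The Inverse Distance Weighting interpolation described above can be sketched as follows; the gauge coordinates and 15-minute rainfall depths are hypothetical, and the power parameter of 2 is a common default rather than the study's stated choice:

```python
import numpy as np

def idw(stations_xy, station_vals, target_xy, power=2.0):
    """Inverse Distance Weighting: estimate rainfall at `target_xy`
    from surrounding gauges, weighting each value by 1/distance**power."""
    pts = np.asarray(stations_xy, dtype=float)
    vals = np.asarray(station_vals, dtype=float)
    d = np.linalg.norm(pts - np.asarray(target_xy, dtype=float), axis=1)
    if np.any(d == 0):                 # target coincides with a gauge
        return float(vals[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * vals) / np.sum(w))

# Hypothetical gauges (km coordinates) and 15-min rainfall depths (mm)
gauges = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
rain = [2.0, 4.0, 6.0]
estimate = idw(gauges, rain, (5.0, 5.0))
# All three gauges are equidistant from (5, 5), so the estimate is their mean, 4.0
```

Applying this per time step to all 10 gauges would yield the continuous basin-average rainfall series.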
Critical precipitation scenarios, at both annual and event scales, were identified to evaluate the performance of SuDS during significant rainfall periods and events. For this reason, we extracted from the complete dataset the year with the maximum precipitation amount (1515.57 mm), the rain event with the maximum one-hour intensity (5.23 mm/h), the event with the maximum overall intensity (7.36 mm/h), and the event with the highest return period (5 years, with a 6.87 mm/h intensity).
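Extracting the maximum one-hour intensity from a 15-minute record, as above, can be sketched with a simple moving-window sum; the example record is hypothetical:

```python
def max_hourly_intensity(depths_15min):
    """Maximum 1-hour rainfall intensity (mm/h) from a series of
    15-minute depths (mm), using a moving window of four steps."""
    if len(depths_15min) < 4:
        raise ValueError("need at least one hour (4 steps) of data")
    best = 0.0
    for i in range(len(depths_15min) - 3):
        # depth over one hour (mm) equals intensity in mm/h
        best = max(best, sum(depths_15min[i:i + 4]))
    return best

# Hypothetical two-hour record of 15-min depths (mm)
record = [0.2, 0.5, 1.0, 2.0, 1.5, 0.8, 0.3, 0.0]
peak = max_hourly_intensity(record)   # the wettest hour is steps 2-5: 5.3 mm/h
```

Ranking all events in the 10-year record by this statistic identifies the critical event scenarios.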
SWMM5 modelling allowed a comparison of the performance of the basin's sewer system (1148 nodes and 1141 pipelines, for a total of 36 km of network) in a "traditional" scenario, without integrated strategies, and after the implementation of green infrastructure (covering about 10% of the surface area and located in the basin in accordance with the current structure of the urban agglomeration).
The results, assessed in terms of the reduction of several parameters following SuDS implementation - runoff coefficient (on average 12% for the annual and 39% for the event analysis), maximum flow in the pipelines (on average 3% and 31%, respectively), maximum total inflow at the outfalls (on average 7% and 40%, respectively) and flooded nodes (on average 23% and 57%, respectively) - suggest that these systems can contribute to mitigating the effects of flooding in urban areas. However, analyses conducted so far that track flow and volume at the outfalls over time did not yield particularly positive results, and the performance of SuDS appears to be especially challenged by severe rainfall events. As future work, this research aims to assess the performance of sustainable drainage systems under common rainfall scenarios and, through an analysis of climate change effects and the creation of rainfall projections, to establish the performance of these systems over time.
How to cite: D'Ambrosio, R., Schmalz, B., and Longobardi, A.: Assessing the performance of Sustainable Drainage Systems (SuDS) in urban context using SWMM5 modelling scenarios: the example of a typical industrial area in Lombardia Region, northern Italy, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21377, https://doi.org/10.5194/egusphere-egu2020-21377, 2020.
EGU2020-20683 | Displays | ITS2.12/HS12.24
Mainstream NBS in sustainable urban planning, strengthen the European Green Deal and EU recommendationsMaria Carmen Garcia Mateo
It is expected that 80% of the world population will be living in urban areas by 2050; the pressure on natural resources will therefore be exacerbated if harmful human environmental practices, ongoing since the pre-industrial era, continue, intensifying the major environmental, social and economic challenges in cities. Scientific evidence shows the potential of Nature-Based Solutions to tackle the environmental, societal and economic challenges related to urbanization, climate change, and the loss of biodiversity and ecosystem services in cities.
The research article analyses the EU regulations and frameworks related to cities, the environment and the economy, and aims to discuss the status quo of the integration of Nature-Based Solutions (NBS) in urban planning in H2020 project cities, in order to improve implementation policies on NBS at the city level. It identifies gaps and potentials through a comprehensive mapping of the terrain of NBS policies in EU cities, allowing for the upscaling and replication of those solutions in the form of a validated roadmap for sustainable cities across Europe and worldwide.
The main findings of the research activities, aimed at shaping the sustainable world of tomorrow, are as follows: in promoting the inclusion of NBS in urban planning and decision-making processes, it was generally perceived that cities with greater investment in research and innovation funding are better placed to design and implement transition pathways towards becoming inclusive, resilient, sustainable, low-carbon and resource-efficient, and to tackle most of the challenges Europe faces today, such as climate change, health and well-being, loss of biodiversity and unsustainable urbanization.
Cities will therefore contribute to improving the environmental, social and economic dimensions, paving the way towards a more resource-efficient, competitive and green economy through the implementation of NBS, which should be addressed in an integrated, coherent and holistic approach to enhance sustainability, resilience and quality of life for dwellers.
How to cite: Garcia Mateo, M. C.: Mainstream NBS in sustainable urban planning, strengthen the European Green Deal and EU recommendations, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20683, https://doi.org/10.5194/egusphere-egu2020-20683, 2020.
EGU2020-4087 | Displays | ITS2.12/HS12.24
Nature-based Solutions to Mitigate Environmental Challenges: A Systems Thinking Approach for Integrated Understanding of Human-Nature InteractionsSamaneh Seifollahi-Aghmiuni, Zahra Kalantari, and Carla Sofia Santos Ferreira
Urban areas increasingly face challenges associated with dynamic interactions between human and nature systems, such as global (land-use, water-use and climate) changes and their related environmental consequences. These challenges can be addressed by sustainable management of the coupled human-nature systems that are being established and developed in urban areas. In this context, nature-based solutions (NbSs), as cost-effective actions, are used to protect, sustain, and restore natural or engineered ecosystems, potentially increasing the services they deliver to humans. Being inspired and supported by nature systems, NbSs provide human well-being and biodiversity benefits and address coupled environmental-social-economic challenges. This study develops an integrated understanding of human-nature interactions by investigating wetland functions and their values in the Stockholm region, a densely populated European urban area. Wetlands integrate natural and anthropogenic processes and help cities adapt to changes by enhancing their resilience to environmental and social challenges. In this study, a participatory approach has been applied to combine local and scientific knowledge in addressing the following questions: (i) What are the underlying system dynamics and interactions between urbanization and wetland regulating ecosystem services as coupled human-nature systems? and (ii) How do these dynamics affect synergies and trade-offs in achieving the Sustainable Development Goals (SDGs)? To this end, relevant actors were involved in thematic sector workshops and followed a systems thinking technique to co-create a causal loop diagram (CLD) as a conceptual system representation. The CLD highlights key components and drivers of the system, providing actor-specific perspectives on interactions and feedback structures within the system.
Dynamic hypotheses on the effectiveness and roles of wetlands as NbSs in the study region have also been examined in a fuzzy cognitive map, developed as a semi-quantitative system representation. The results provide insights on wetland contributions to attaining SDGs in urban areas, as well as potential transition pathways toward sustainable development by identifying opportunities and barriers for the study region.
How to cite: Seifollahi-Aghmiuni, S., Kalantari, Z., and Santos Ferreira, C. S.: Nature-based Solutions to Mitigate Environmental Challenges: A Systems Thinking Approach for Integrated Understanding of Human-Nature Interactions, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4087, https://doi.org/10.5194/egusphere-egu2020-4087, 2020.
EGU2020-4216 | Displays | ITS2.12/HS12.24
Ecosystem Amenities as Drivers for Urban Development and its Resulting Impacts: A Social-Ecological Modeling Approach for Stockholm, SwedenHaozhi Pan and Jessica Page
A comprehensive understanding and modeling of socio-ecological systems can better assess the interactions between ecosystem amenities and human urban development. Based on the theory of ecosystem service supply and demand, this paper constructs a comprehensive socio-ecological system modeling approach to identify the roles of ecosystem amenities in shaping human urban development and how these developments in turn affect ecosystem services. In our model, ecosystem services are regarded as both attractors and costs to the human activities that cause urban land use changes. It adds to existing ecosystem impact assessment approaches by integrating ecosystem services as both supply and demand in a socio-ecological process model with dynamic interaction and feedback between social and ecological systems. The approach couples socioeconomic, urban land use, and ecosystem interactions in a fine-scale (30×30 m) modeling framework with multiple time steps and feedback. Calibration through machine-learning techniques is applied to depict the joint driving forces of ecosystem amenities, socioeconomic attractors, and biophysical factors in influencing urban land use development. An Ecosystem Preservation District policy is tested as a policy scenario that aims to protect high-value ecosystem service areas while ensuring maximum ecosystem amenity provision to urban inhabitants. Stockholm County, Sweden constitutes the study area, with forecasts to 2040.
The analytical results will include: 1) calibrated functional forms and variable coefficients of the ecosystem amenities that drive urban development, in comparison to other socioeconomic and biophysical variables; 2) an assessment of the ecosystem service value losses induced by human development; and 3) a simulation of the ecosystem service value preserved under the Ecosystem Preservation District policy scenario. The analytical evidence provides a further proof-of-concept of the capability of comprehensive socio-ecological models for understanding interactions between human and ecological systems. The policy scenario analysis offers supporting evidence for mitigating the environmental impacts of urban growth through growth management policies. Finally, optimization of ecosystem service supply and demand is a critical part of the toolkit for nature-based solutions in urban planning and management.
How to cite: Pan, H. and Page, J.: Ecosystem Amenities as Drivers for Urban Development and its Resulting Impacts: A Social-Ecological Modeling Approach for Stockholm, Sweden, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4216, https://doi.org/10.5194/egusphere-egu2020-4216, 2020.
EGU2020-4755 | Displays | ITS2.12/HS12.24
Carbon Emissions and Sequestrations in Urban Landscapes and their Various and Changing Land and Water CoversJessica Page, Elisie Jonsson, Zahra Kalantari, and Georgia Destouni
In order to meet the dual challenges of providing for a growing global population and mitigating climate change effects, it is necessary to consider how urban areas can grow while achieving carbon neutrality, which is a complex and difficult task. It requires increased understanding of carbon dynamics in the coupled urban social-ecological systems, including process-level understanding and distinction of natural and human-perturbed carbon exchanges and their interactions. A better understanding of these complex systems and processes could, for example, facilitate enhanced use of nature-based solutions (NBS) to help mitigate and offset the greenhouse gas (GHG) emissions of urban regions. This paper addresses part of this challenge, aiming to further understanding of the complex interactions between urban growth and GHG emissions implied by associated land use changes, including the influence of water bodies within the urban region on the carbon source-sink dynamics.
The study involves a comprehensive analysis of the land-use related GHG emissions and removals (through carbon sequestration) in the urban region of Stockholm County in Sweden, which is currently experiencing large urban growth and rapid population growth. Stockholm County includes large urban areas, forested areas (both old and young preserved natural forests and managed forestry), farmlands, some wetlands, and a number of smaller towns and semi-urban areas. Geographically, much of the county is located on the Stockholm Archipelago, a series of islands in the Baltic Sea, and the remainder is dominated by many lakes, including Lake Mälaren, which is Sweden's third largest lake and the main water supply for the capital city Stockholm. The extensive water coverage in the county allows its effects on regional GHG emissions and sequestrations to be investigated in combination with, and in relation to, the variable and changing distribution of urban and other land covers. These effects may be considerable and are addressed in this study.
Results include an inventory of existing and planned land uses in Stockholm County, and the GHG emissions or sequestration potentials associated with each of these. The land uses include urban and semi-urban areas, different types of natural and cultivated vegetation, agriculture, forestry, water bodies and wetlands. The study provides a map of Stockholm County's GHG emission and sequestration potential, which is further analysed to advance our understanding of how future development in the county can be shaped to effectively minimize urban GHG emissions and maximize carbon sequestration. The inclusion of water bodies in this GHG inventory proved to be particularly interesting: while lakes and other water bodies are often considered as 'blue' nature-based solutions (NBS) for maintaining and providing a number of ecosystem services in urban regions, our results indicate that the lakes in Stockholm County are considerable sources of GHG emissions. The contribution of inland waters to regional GHG emissions underscores the importance of improving, rather than degrading, the regional carbon sequestration potential in the urbanization process. This can be achieved by using and enhancing other types of NBS, such as the rehabilitation of green areas like forests, in order to achieve carbon neutrality in this urban region.
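At its core, a land-use GHG inventory of this kind multiplies the area of each cover class by a net emission (or sequestration) factor and sums the results. A minimal sketch, with entirely hypothetical classes, areas and factors rather than the study's inventory values:

```python
# Hypothetical areas (ha) and net emission factors (t CO2-eq per ha per year);
# negative factors denote net sequestration
areas = {"urban": 50_000, "forest": 250_000, "agriculture": 40_000,
         "wetland": 5_000, "lake": 60_000}
factors = {"urban": 4.0, "forest": -2.5, "agriculture": 1.5,
           "wetland": -0.5, "lake": 0.8}

# Regional balance: positive = net source, negative = net sink
net = sum(areas[c] * factors[c] for c in areas)
print(f"net regional balance: {net:,.0f} t CO2-eq/yr")
```

Note that, consistent with the finding above, even a small positive factor for a large lake area can offset a substantial share of the forest sink in such a tally.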
How to cite: Page, J., Jonsson, E., Kalantari, Z., and Destouni, G.: Carbon Emissions and Sequestrations in Urban Landscapes and their Various and Changing Land and Water Covers, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4755, https://doi.org/10.5194/egusphere-egu2020-4755, 2020.
EGU2020-6515 | Displays | ITS2.12/HS12.24
Using Eye-tracking and Deep Learning Approach to Promote Naturalness in Urban LandscapeYeshan Qiu, Yugang Chen, and Shengquan Che
Promoting greenness and naturalness has been an integral goal of nature-based solutions for urban environments. Designing and building landscapes that the public subjectively appreciates is a key factor in the success of promoting urban greenness and naturalness, yet current measures of naturalness are siloed from public appreciation and acceptance of urban landscape designs. Our goal is to use state-of-the-art methods, combined with traditional design perception evaluation, to embed naturalness within a public landscape aesthetic perception evaluation system. A deep learning and eye-tracking based approach to understanding public aesthetic perceptions of landscape street-view images is developed and applied to a case study of Shanghai. We use deep learning techniques to identify and assess landscape composition from landscape images and in-situ captured data, studying the influence of naturalness on public perceptions of landscape based on a Bayesian network aesthetic evaluation model. The methodology extends the present landscape aesthetic evaluation framework and has the potential to be implemented in much wider applications. Our results indicate a co-conception of naturalness and public appreciation as a proof-of-concept of nature-based solutions.
Keywords: Eye-tracking; Deep Learning; Naturalness; Public aesthetic perceptions; Bayesian network aesthetic evaluation
How to cite: Qiu, Y., Chen, Y., and Che, S.: Using Eye-tracking and Deep Learning Approach to Promote Naturalness in Urban Landscape, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6515, https://doi.org/10.5194/egusphere-egu2020-6515, 2020.
EGU2020-6516 | Displays | ITS2.12/HS12.24
Combing deep learning and multi-source data to promote subjective perception of ecosystem services in urban landscapeYugang Chen, Yeshan Qiu, and Shengquan Che
Subjective perception of ecosystem services is an emerging topic for better understanding nature-based solutions for human and natural sustainability. Survey-based methods for subjective perception have difficulty moving their conclusions beyond site-specific applications. Potential data for subjective perception exist in many sources, such as geo-tagged social media and street-view photos. In this paper, we develop a combined deep-learning, survey, and multi-source big data approach to study and promote subjective ecosystem service perceptions beyond site-specific applications. Specifically, we use machine learning models, trained to predict human perception from a large dataset of images, to rate urban landscape photos from social media and street-view maps. The predictors include CNN-engineered photo features, geographic information, and survey-based ratings as well as public ratings from social media and street-view maps. The method of this study can be applied to understand subjective perception of ecosystem services for a wide range of urban landscape sites. The results contribute to a better understanding of the connections between subjective perception and objective evaluation of ecosystem service value for urban landscapes, so that nature-based solutions can be better implemented for human well-being and sustainability.
Keywords: Deep learning; Multi-source big data; Subjective perception; Ecosystem services; Social media
How to cite: Chen, Y., Qiu, Y., and Che, S.: Combing deep learning and multi-source data to promote subjective perception of ecosystem services in urban landscape, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6516, https://doi.org/10.5194/egusphere-egu2020-6516, 2020.
EGU2020-9115 | Displays | ITS2.12/HS12.24
Heavy metals removal by flax fibers to a further use in urban runoff management systemsMeriem Kajeiou, Abdellah Alem, Anne Pantet, Soumaya Mezghich, and Nasre-Dine Ahfir
Abstract
Water pollution has long been considered a major problem causing environmental and public health issues. A range of contaminants is encountered in wastewater, industrial effluents and road runoff, including total suspended solids, nutrients, hydrocarbons and heavy metals. The latter are highly toxic and hazardous, both for human health and for fauna and flora. In recent decades, studies have demonstrated good removal efficiency of heavy metals by adsorption techniques, and especially by biosorption. Numerous biosorbents have been investigated, mainly lignocellulosic materials, which have shown high adsorption capacities. Within this context, this study aims to investigate the capacity of flax fibers to remove zinc, copper and lead ions from aqueous solutions, in order to determine the best conditions for testing a full-scale device designed to treat stormwater runoff. The choice of flax is related to its high availability, its low cost and local economy considerations. The device consists of sand and layers of flax fiber geotextiles, and will be placed in a parking area at the entrance of a retention basin in Le Havre. For this purpose, batch experiments were carried out with ternary and mono-metal solutions of zinc, copper and lead ions at room temperature, with molar concentrations of 0.04 mmol/L, at a pH of around 6.4. Biosorption kinetics and biosorption equilibrium were measured and analyzed. The results showed favorable adsorption for the three metals in the order Pb > Cu > Zn in both types of solution, with adsorption rates of 94%, 75% and 62%, respectively, in the ternary metal solution and 94%, 81% and 82% in the mono-metal solutions. The effect of competition was important for zinc, barely visible for copper, and non-existent for lead.
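The adsorption rates quoted above are conventionally computed from the initial and equilibrium metal concentrations in the batch solution. A minimal sketch, with a hypothetical equilibrium concentration chosen for illustration:

```python
def removal_percent(c0, ce):
    """Removal efficiency (%) from initial (c0) and equilibrium (ce)
    solution concentrations, both in the same units (e.g. mmol/L)."""
    return 100.0 * (c0 - ce) / c0

# Hypothetical batch result: 0.04 mmol/L initial lead,
# 0.0024 mmol/L remaining at equilibrium
print(round(removal_percent(0.04, 0.0024), 1))  # prints 94.0
```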
Keywords: Biosorption, heavy metals, pollutants, stormwater management systems.
How to cite: Kajeiou, M., Alem, A., Pantet, A., Mezghich, S., and Ahfir, N.-D.: Heavy metals removal by flax fibers to a further use in urban runoff management systems, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9115, https://doi.org/10.5194/egusphere-egu2020-9115, 2020.
EGU2020-10661 | Displays | ITS2.12/HS12.24
Effect of trees on street canyon ventilation
Sofia Fellini, Alessandro De Giovanni, Massimo Marro, Luca Ridolfi, and Pietro Salizzoni
Due to the overall growth of the world population and the progressive shift from rural to urban centres, 70% of the world population is expected to live in urban areas by 2050. This trend is alarming when set against the constant decline of urban air quality at the global level. To cope with this rapid urbanization, solutions for sustainable cities are extensively sought. In this framework, the mitigation of air pollution in street canyons plays a crucial role. The street canyon (a street flanked by high buildings on both sides) is the fundamental unit of the urban fabric, as well as a vital public and residential space. Street canyons are particularly vulnerable to air pollution due to traffic emissions, low ventilation conditions, and the number of citizens exposed. Tree planting in street canyons is often used as a pollution mitigation strategy, owing to the filtering effect of vegetation on airborne pollutants. However, from an aerodynamic point of view, trees can obstruct the wind flow, reducing canyon ventilation and leading to higher pollutant concentrations. In this framework, we present the results of an experimental study aimed at evaluating how tree planting influences the flow and concentration fields within a street canyon. The study was carried out in a recirculating wind tunnel. An idealised urban district was simulated by an array of square blocks, whose orientation with respect to the incident wind was varied. Within this urban geometry, two rows of model trees were arranged at the sides of a street canyon. Three configurations with different spacing between the trees were considered. A passive scalar was injected from a line source placed at ground level to simulate traffic emissions. Concentration and flow field measurements were performed in several cross-sections of the street canyon. Results showed the effect of trees on the spatial distribution of pollutants.
Moreover, a characteristic exchange velocity between the street canyon and the overlying atmosphere was estimated to quantify the overall canyon ventilation under several wind directions and different planting densities. These preliminary results provide city planners with first recommendations for the sustainable design of urban environments. Moreover, the experimental dataset is valuable in validating numerical simulations of air pollution in cities accounting for urban vegetation.
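A characteristic exchange velocity of this kind is often defined through a simple mass balance: at steady state, the tracer emitted by the line source must leave through the canyon's top opening. The sketch below illustrates that definition; the symbols and all numerical values are assumptions for illustration, and the authors' exact formulation may differ.

```python
# Hedged sketch of a mass-balance estimate of canyon exchange velocity:
# at steady state the line-source emission Q leaves through the top
# opening of the canyon, so u_E = Q / (c_mean * W * L). Illustrative,
# wind-tunnel-scale numbers, not the study's measurements.

def exchange_velocity(Q, c_mean, width, length):
    """u_E [m/s] from emission rate Q [kg/s], spatially averaged canyon
    concentration c_mean [kg/m^3], and top opening area W x L [m^2]."""
    return Q / (c_mean * width * length)

Q = 1.0e-6        # kg/s passive-scalar release (hypothetical)
c_mean = 2.0e-5   # kg/m^3 mean in-canyon concentration (hypothetical)
W, L = 0.1, 1.0   # m canyon width and length at model scale (hypothetical)

u_e = exchange_velocity(Q, c_mean, W, L)
print(f"exchange velocity: {u_e:.3f} m/s")  # -> 0.500 m/s for these inputs
```

A lower u_E for a given tree configuration and wind direction would then indicate poorer canyon ventilation.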
How to cite: Fellini, S., De Giovanni, A., Marro, M., Ridolfi, L., and Salizzoni, P.: Effect of trees on street canyon ventilation, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10661, https://doi.org/10.5194/egusphere-egu2020-10661, 2020.
EGU2020-11378 | Displays | ITS2.12/HS12.24
Out of the Lab, Into the Frying Pan: Understanding the Effect of Natural Groundwater Conditions on Bio-Based Ground Improvement Strategies
Caitlyn Hall, Bruce Rittmann, Leon van Paassen, and Edward Kavazanjian
We are developing a biogeochemical model for microbial denitrification-driven ground improvement that accounts for the complexities expected in the field, including microbial inhibition and competition. We will use this model to support Microbially Induced Desaturation and Precipitation (MIDP) via denitrification as an alternative bio-based ground improvement strategy, considering different treatment recipes and natural groundwater compositions. Current ground improvement techniques have limited utility underneath or near existing structures. Developing alternatives is becoming increasingly important as urbanization increases. Large, centralized populations and infrastructure are more vulnerable to natural disasters and geologic hazards such as earthquake-induced liquefaction and flooding. Bio-based ground stabilization techniques may be less disruptive to deploy and monitor, allowing application underneath existing structures. MIDP is a two-stage ground-improvement process in which biogenic gas desaturation provides immediate improvement while calcium carbonate precipitation provides long-term stability. MIDP influences the geochemical environment and the hydro-mechanical behavior of soils through biogenic gas production, precipitation of calcium carbonate, and biomass growth. All three components alter the biogeochemical environment and subsurface permeability, thereby affecting the transport of substrates and subsequent product formation. The products of MIDP have been shown to mitigate liquefaction at the lab scale. MIDP experimentation and modeling have primarily considered only de-ionized water and simplified water compositions. However, denitrifying microorganisms compete with microorganisms that use alternative electron acceptors, such as sulfate and iron, and are influenced by the environment's pH and salinity, which may impede MIDP treatment. Our biogeochemical model can predict the products and by-products of MIDP treatment under realistic groundwater conditions.
The results of this model will be used to develop comprehensive treatment plans for upcoming field trials to demonstrate treatment effectiveness and develop best practices for future application.
How to cite: Hall, C., Rittmann, B., van Paassen, L., and Kavazanjian, E.: Out of the Lab, Into the Frying Pan: Understanding the Effect of Natural Groundwater Conditions on Bio-Based Ground Improvement Strategies, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11378, https://doi.org/10.5194/egusphere-egu2020-11378, 2020.
EGU2020-17174 | Displays | ITS2.12/HS12.24
Spatial Dynamic Modeling of Ecological and Agricultural Land Preservation Strategies for Sustainable Urban Development in Nanjing, China
Zipan Cai, Si Chen, and Vladimir Cvetkovic
In the context of accelerated urbanization, ecological and agricultural lands are continuously sacrificed for urban construction, which may severely affect the urban ecological environment and the health of citizens in the long term. To explore the sustainable development of cities, it is of considerable significance to study the complex and non-linear coupling relationship between urban expansion and the ecological environment. Unlike static quantitative analysis, this paper will establish a spatial dynamic modeling approach that couples urban land-use change and ecosystem services. The approach combines a network-based analysis method with accurate environmental assessments, including a causal change mechanism that simplifies the complex interaction between the urban system and the surrounding environment. Because the model can use pre-determined cell transformation rules to simulate the conversion probability of land cells at a specific point in time, it provides the opportunity to test the impact of changes under different policy scenarios. In the environmental impact assessment phase, the change probability will be converted into an environmental impact based on the calculation of ecosystem services values under different development scenarios. Taking Nanjing, a rapidly developing city in China, as an example, this paper will set up a variety of sustainable development policy scenarios based on the feedback relationships of local land-use driving factors. We will test and evaluate the “what-if” consequences through a comparative study to help design the optimal environmental regulation scheme. Planning and decision support will be provided to further guide the rational allocation of land-use parcels and land development intensity towards a sustainable future.
As a result, this study can support policy decision-making on urban land-use planning and help achieve ecological and agricultural land preservation strategies.
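The cell transformation rules mentioned above can be sketched as a simple cellular-automaton-style transition probability: each cell's chance of converting to urban land combines a suitability score, a neighbourhood effect, and a policy constraint that masks protected ecological or agricultural cells. The factor names, weights, and grid below are hypothetical illustrations, not the authors' calibrated model.

```python
# Hedged sketch of a cellular transition rule for urban land-use change:
# P(convert) = suitability * neighbourhood effect * policy constraint.
# Protected cells (constraint = 0) can never convert. All layers here
# are tiny illustrative arrays, not Nanjing data.
import numpy as np

def neighbourhood_effect(urban_mask):
    """Share of urban cells in each cell's 3x3 neighbourhood (self excluded)."""
    padded = np.pad(urban_mask.astype(float), 1)
    n = sum(padded[i:i + urban_mask.shape[0], j:j + urban_mask.shape[1]]
            for i in range(3) for j in range(3)) - urban_mask
    return n / 8.0

def conversion_probability(suitability, neighbourhood, constraint):
    """Per-cell conversion probability; constraint = 0 masks protected land."""
    return suitability * neighbourhood * constraint

urban = np.array([[1, 1, 0],        # 1 = already urban
                  [1, 0, 0],
                  [0, 0, 0]])
suit = np.full(urban.shape, 0.6)    # hypothetical uniform suitability layer
protect = np.array([[1, 1, 1],      # 0 = ecological/agricultural land
                    [1, 1, 1],      # excluded from conversion
                    [1, 1, 0]])

p = conversion_probability(suit, neighbourhood_effect(urban), protect)
print(np.round(p, 3))  # protected corner cell stays at probability 0
```

Different policy scenarios then correspond to different constraint masks, and the resulting probability surfaces can feed an ecosystem-services valuation.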
How to cite: Cai, Z., Chen, S., and Cvetkovic, V.: Spatial Dynamic Modeling of Ecological and Agricultural Land Preservation Strategies for Sustainable Urban Development in Nanjing, China, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17174, https://doi.org/10.5194/egusphere-egu2020-17174, 2020.
EGU2020-18356 | Displays | ITS2.12/HS12.24
The climate impact of municipal land policy
Franziska Koebsch, Ulrike Huth, and Petra Kahle
Ecosystems can store significant amounts of carbon dioxide and are therefore often considered nature-based solutions to combat climate change. However, anthropogenic perturbations can turn these natural sinks into substantial sources of greenhouse gases. The climate impact of land use and land-use change is well recognized at the national and international levels. Yet municipalities, which implement concrete measures on the ground, lack a tool to quantify the climate effect of their land-use decisions.
Using the German city of Rostock (200,000 inhabitants) as an example, we present an approach to evaluate the climate effect of different land-use trajectories in urban areas. The approach makes use of municipal land-use maps and complies with the IPCC inventory guidelines. Based on this emission assessment, we can provide generic recommendations for exploiting nature-based solutions for climate protection in municipal land policy.
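The IPCC-compliant accounting mentioned above typically rests on a stock-change calculation: the annual CO2 emission from a land-use conversion is the converted area times the difference in per-hectare carbon stocks, spread over a transition period and converted from carbon to CO2 by 44/12. The sketch below illustrates that arithmetic; the stock values and transition period are placeholders, not figures from the Rostock inventory.

```python
# Hedged sketch of an IPCC-style Tier 1 stock-change calculation for one
# land-use transition. Carbon stock values below are illustrative
# placeholders, not Rostock's inventory data.

CO2_PER_C = 44.0 / 12.0  # molar mass ratio CO2 : C

def annual_co2_from_conversion(area_ha, c_stock_before, c_stock_after,
                               transition_years=20.0):
    """Annual emission in t CO2/yr for area_ha converted land, with
    stocks in t C/ha and the stock change spread over a default
    20-year transition period (IPCC Tier 1 convention)."""
    delta_c = (c_stock_before - c_stock_after) * area_ha
    return delta_c / transition_years * CO2_PER_C

# Illustrative: 10 ha of grassland (80 t C/ha) sealed for settlement (20 t C/ha)
emission = annual_co2_from_conversion(10.0, 80.0, 20.0)
print(f"{emission:.1f} t CO2 per year")  # -> 110.0 t CO2 per year
```

Summing such terms over all transitions in the municipal land-use map yields the city-scale emission assessment.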
How to cite: Koebsch, F., Huth, U., and Kahle, P.: The climate impact of municipal land policy, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18356, https://doi.org/10.5194/egusphere-egu2020-18356, 2020.
EGU2020-20297 | Displays | ITS2.12/HS12.24
A new tool to accurately calculate root reinforcement: the Root Bundle Model software RBM++
Ilenia Murgia, Denis Cohen, Filippo Giadrossich, Gian Franco Capra, and Massimiliano Schwarz
The influence of vegetation on the hydro-geomorphological response is widely recognized, and root reinforcement mechanisms are an important component of slope stability models. The calculation of this essential information is very complex because of the multiple interactions in the root-soil system, but also because of several mechanical characteristics that influence the tension and compression behaviour of the root itself.
This contribution has two aims. The first is to present root reinforcement parameters for Robinia pseudoacacia (L.), a tree commonly used for the mitigation of rainfall-induced landslides at the small scale. This species is very widespread because it is able to grow on marginal areas, such as abandoned hillside sites, or on infrastructure, such as road and railway scarps, but its characterization represents a knowledge gap in the literature. Field pullout tests were performed to collect input data for the quantification of root reinforcement using the Root Bundle Model with Weibull survival function (RBMw; Schwarz et al., 2013). Recent studies have shown that the RBMw is a very efficient model for the evaluation of root reinforcement because it considers the heterogeneity of both root mechanical characteristics and root distribution in the soil. However, due to the model's complexity and its need for information that is difficult to obtain, simpler but less accurate approaches, such as the Wu model, have often been preferred.
For this reason, the second aim of this work is to present RBM++, a new, easy-to-use tool written in C++ that enables anyone, from universities to private companies, to quantify the effect of roots on slope stability. RBM++ allows the calculation of root reinforcement using two different methods: the first by entering one's own data on the mechanical parameters of the roots, estimated beforehand with pullout tests in the field, together with the root distribution in the soil; the second by selecting the tree species and the data related to the spatial root distribution. The first method requires a pullout machine to obtain the data. Because this instrument is not commonly available, the model offers default parameters for nine tree species based on values found in the literature.
Output from RBM++ comes in tabular format and as a plot that shows, via the graphical user interface, the spatial distribution of forces as a function of distance from the tree trunk and of tree size.
RBM++ makes it easier to share and exchange knowledge related to root reinforcement. It will therefore enable the creation of a database containing standard data on the root mechanical behavior of tree species commonly used for shallow landslide mitigation.
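The core idea of the RBMw that RBM++ implements can be sketched as follows: the bundle force at a given displacement sums, over root diameter classes, the number of roots times a per-root force times a Weibull survival probability that the root is still intact. The stiffness/strength values, Weibull parameters, and diameter classes below are illustrative placeholders, not the calibrated Robinia pseudoacacia parameters from the field tests.

```python
# Hedged sketch of a Root-Bundle-Model-with-Weibull-survival calculation:
# F_total(dx) = sum over classes of  n * k * dx * exp(-((dx/dx_fail)/lam)^omega)
# where each class i has n roots of stiffness k that nominally fail at
# displacement dx_fail, softened by a Weibull survival function. All
# parameter values are hypothetical.
import math

def rbmw_force(dx, classes, lam=1.0, omega=2.5):
    """Total bundle reinforcement [N] at pullout displacement dx [m].
    classes: list of (n_roots, stiffness N/m, failure displacement m)."""
    total = 0.0
    for n, k, dx_fail in classes:
        survival = math.exp(-((dx / dx_fail) / lam) ** omega)
        total += n * k * dx * survival
    return total

# Illustrative bundle: three diameter classes, thin roots fail first
classes = [(40, 2.0e3, 0.01),   # many thin, compliant roots
           (10, 8.0e3, 0.03),
           (2, 3.0e4, 0.08)]    # few thick, stiff roots

# Scan displacements from 1 to 99 mm for the peak reinforcement
peak = max(rbmw_force(d / 1000.0, classes) for d in range(1, 100))
print(f"peak reinforcement ~ {peak:.0f} N")
```

The progressive, displacement-dependent failure captured by the survival term is what distinguishes this family of models from the Wu model, which assumes all roots break simultaneously.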
How to cite: Murgia, I., Cohen, D., Giadrossich, F., Capra, G. F., and Schwarz, M.: A new tool to accurately calculate root reinforcement: the Root Bundle Model software RBM++, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20297, https://doi.org/10.5194/egusphere-egu2020-20297, 2020.
ITS2.13/AS4.29 – Climatic, environmental and societal impacts of volcanic activity
EGU2020-8333 | Displays | ITS2.13/AS4.29
Volcanically induced stratospheric water vapor changes
Clarissa Kroll, Alon Azulay, Hauke Schmidt, and Claudia Timmreck
Stratospheric water vapor (SWV) is important not only for stratospheric ozone chemistry but also due to its influence on the atmospheric radiation budget.
After volcanic eruptions, SWV is known to increase due to two different mechanisms. First, water within the volcanic plume is directly injected into the stratosphere during the eruption itself. Second, the volcanic aerosols lead to a warming of the lower stratosphere, including the tropopause layer. The increased temperature of the cold point allows more water vapor to transit from the troposphere to the stratosphere. Little is known about this process, as it is obscured by internal variability and observations are scarce.
To better understand the increased SWV entry via the indirect pathway after volcanic eruptions, we employ a suite of large volcanically perturbed ensemble simulations with MPI-ESM1.2-LR for five different eruption strengths (2.5 Mt, 5 Mt, 10 Mt, 20 Mt and 40 Mt of sulfur). Each ensemble consists of 100 realizations over a period of 3 years.
Our work mainly focuses on the tropical tropopause layer (TTL), quantifying changes in relevant parameters such as the atmospheric temperature profile and the consequent increase in SWV. A maximum increase of up to 4 ppmm in the first two years after the eruption is found for the 40 Mt eruption. Furthermore, the large ensemble size allows an analysis of statistical significance and of the influence of variability: SWV increases can already be detected for the 2.5 Mt eruption in the ensemble mean, whereas for single ensemble members internal variability dominates the SWV entry up to an eruption strength of 10 Mt to 20 Mt, depending on the season and the time after the eruption. The study is complemented by investigations using the 1D radiative-convective equilibrium model konrad to understand the radiative effects of the SWV increase.
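The detection logic described above, where a weak signal is visible in the ensemble mean but lost in individual members, can be illustrated with a simple two-standard-error criterion on synthetic data. The threshold choice and all numbers are assumptions for illustration, not MPI-ESM output or the study's actual significance test.

```python
# Hedged sketch: a volcanic SWV signal counts as "detectable" in the
# ensemble mean when the perturbed-minus-control mean difference exceeds
# ~2 standard errors of that difference. Synthetic, illustrative data only.
import numpy as np

rng = np.random.default_rng(0)

def detectable(perturbed, control, n_sigma=2.0):
    """True if the ensemble-mean difference exceeds n_sigma standard errors."""
    diff = perturbed.mean() - control.mean()
    se = np.sqrt(perturbed.var(ddof=1) / perturbed.size
                 + control.var(ddof=1) / control.size)
    return bool(abs(diff) > n_sigma * se)

# 100-member "ensembles": internal variability only vs. a +1 ppm signal
control = rng.normal(4.0, 0.5, size=100)  # ppm SWV, no eruption
strong = control + 1.0                    # large-eruption-like shift

print("no eruption detectable:", detectable(control + 0.0, control))  # False
print("strong eruption detectable:", detectable(strong, control))     # True
```

Averaging over many members shrinks the standard error by 1/sqrt(N), which is why a 100-member ensemble can pull even a 2.5 Mt signal out of the internal variability that swamps any single realization.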
How to cite: Kroll, C., Azulay, A., Schmidt, H., and Timmreck, C.: Volcanically induced stratospheric water vapor changes, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8333, https://doi.org/10.5194/egusphere-egu2020-8333, 2020.
EGU2020-12957 | Displays | ITS2.13/AS4.29
Persistent draining of the stratospheric 10Be reservoir after the Samalas volcanic eruption (1257 CE)
Mélanie Baroni, Edouard Bard, Jean-Robert Petit, Sophie Viseur, and Aster Team
More than 2,000 analyses of beryllium‐10 (10Be) and sulphate concentrations were performed at a nominal subannual resolution on an ice core covering the last millennium as well as on shorter records from three sites in Antarctica (Dome C, South Pole, and Vostok) to better understand the increase in 10Be deposition during stratospheric volcanic eruptions.
A significant increase in 10Be concentration is observed in 14 of the 26 volcanic events studied. The slope and intercept of the linear regression between 10Be and sulphate concentrations provide different and complementary information. Slope is an indicator of the efficiency of the draining of 10Be atoms by volcanic aerosols depending on the amount of sulphur dioxide (SO2) released and on the altitude it reaches in the stratosphere. The intercept provides an appreciation of the 10Be production in the stratospheric reservoir, ultimately depending on solar modulation (Baroni et al., 2019, JGR).
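The slope/intercept decomposition described above can be sketched as a simple least-squares fit. This is an illustrative example only: the concentration values below are invented for demonstration, not the published Antarctic ice-core data.

```python
import numpy as np

# Hypothetical per-sample concentrations (not the measured Dome C / South Pole / Vostok data)
sulphate = np.array([50.0, 120.0, 200.0, 310.0, 420.0])  # sulphate, ng/g
be10 = np.array([1.1e4, 1.4e4, 1.9e4, 2.4e4, 3.0e4])     # 10Be, atoms/g

# Degree-1 polyfit returns [slope, intercept] of the 10Be-vs-sulphate regression
slope, intercept = np.polyfit(sulphate, be10, 1)

# Slope: efficiency of 10Be scavenging by volcanic sulphate aerosols.
# Intercept: background stratospheric 10Be level, modulated by solar activity.
print(f"slope = {slope:.1f} (atoms/g)/(ng/g), intercept = {intercept:.0f} atoms/g")
```

A flatter (lower positive) slope, as reported for Samalas, would indicate less additional 10Be drained per unit of volcanic sulphate.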
Among all the identified events, the Samalas event (1257 CE) stands out as the biggest eruption of the last millennium with the lowest positive slope. It released (158 ± 12) Tg of SO2 up to an altitude of 43 km in the stratosphere (Lavigne et al., 2013, PNAS; Vidal et al., 2016, Sci. Rep.). We hypothesize that the persistence of volcanic aerosols in the stratosphere after the Samalas eruption drained the stratospheric 10Be reservoir for a decade.
The persistence of Samalas sulphate aerosols might be due to an increase in SO2 lifetime because of: (i) the exhaustion of the OH reservoir required for sulphate formation (e.g. Bekki, 1995, GRL; Bekki et al., 1996, GRL; Savarino et al., 2003, JGR); and/or (ii) the evaporation followed by photolysis of gaseous sulphuric acid back to SO2 at altitudes higher than 30 km (Delaygue et al., 2015, Tellus; Rinsland et al., 1995, GRL). In addition, the lifetime of air masses increases to 5 years above 30 km altitude, compared with 1 year for aerosols and air masses in the lower stratosphere (Delaygue et al., 2015, Tellus). When this high-altitude SO2 finally returns below the 30 km limit, it could be oxidized back to sulphate and form new sulphate aerosols. These processes could imply that the 10Be reservoir is washed out over a long time period following the end of the Samalas eruption.
This would run counter to modelling studies that predict the formation of large particle sizes and their rapid fall out due to the large amount of SO2, which would limit the climatic impact of Samalas-type eruptions (Pinto et al., 1989, JGR; Timmreck et al., 2010, 2009, GRL).
How to cite: Baroni, M., Bard, E., Petit, J.-R., Viseur, S., and Team, A.: Persistent draining of the stratospheric 10Be reservoir after the Samalas volcanic eruption (1257 CE), EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12957, https://doi.org/10.5194/egusphere-egu2020-12957, 2020.
EGU2020-5254 | Displays | ITS2.13/AS4.29
Large variations in volcanic aerosol forcing efficiency due to eruption source parameters and rapid adjustmentsLauren Marshall, Christopher Smith, Piers Forster, Thomas Aubry, and Anja Schmidt
The relationship between volcanic stratospheric aerosol optical depth (SAOD) and volcanic forcing is key to quantifying the climate impacts of volcanic eruptions. In its fifth assessment report, the Intergovernmental Panel on Climate Change uses a single scaling factor between volcanic SAOD and effective radiative forcing (ERF), based on climate model simulations of the 1991 Mt. Pinatubo eruption, which may not be appropriate for eruptions of different magnitudes. Using a large ensemble of aerosol-chemistry-climate simulations of eruptions with different SO2 emissions, latitudes, emission altitudes and seasons, we find that the effective radiative forcing is on average 21% less than the instantaneous radiative forcing, predominantly due to a positive shortwave cloud adjustment. In our model, the volcanic SAOD to ERF relationship is non-unique and depends strongly on eruption latitude and season. We recommend a power-law fit of the form ERF = -15.1 × SAOD^0.88 to convert SAOD (in the range 0.01-0.7) to ERF.
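The recommended power law is straightforward to apply; a minimal sketch (the function name and the range check are our own, only the fit coefficients come from the abstract):

```python
def erf_from_saod(saod: float) -> float:
    """Effective radiative forcing (W/m2) from stratospheric aerosol optical depth,
    using the power-law fit ERF = -15.1 * SAOD**0.88 quoted in the abstract."""
    if not 0.01 <= saod <= 0.7:
        # The fit is quoted only for SAOD between 0.01 and 0.7
        raise ValueError("SAOD outside the fitted range 0.01-0.7")
    return -15.1 * saod ** 0.88

# A Pinatubo-scale SAOD of ~0.15 yields roughly -2.8 W/m2 under this fit
print(erf_from_saod(0.15))
```

Because the exponent is below 1, forcing efficiency per unit SAOD decreases for larger eruptions, which is why a single linear scaling factor can misestimate ERF across eruption magnitudes.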
How to cite: Marshall, L., Smith, C., Forster, P., Aubry, T., and Schmidt, A.: Large variations in volcanic aerosol forcing efficiency due to eruption source parameters and rapid adjustments, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5254, https://doi.org/10.5194/egusphere-egu2020-5254, 2020.
EGU2020-2067 | Displays | ITS2.13/AS4.29
ITCZ shift and extratropical teleconnections drive ENSO response to volcanic eruptionsFrancesco S.R. Pausata, Davide Zanchettin, Christina Karamperidou, Rodrigo Caballero, and David S. Battisti
The mechanisms through which volcanic eruptions impact the El Niño-Southern Oscillation (ENSO) state are still controversial. Previous studies have invoked direct radiative forcing, an ocean dynamical thermostat (ODT) mechanism and shifts of the Intertropical Convergence Zone (ITCZ), among others, to explain the ENSO response to tropical eruptions. Here, these mechanisms are tested using ensemble simulations with an Earth System Model in which volcanic aerosols from a Tambora-like eruption are confined either in the Northern or the Southern Hemisphere. We show that the primary drivers of the ENSO response are the shifts of the ITCZ together with extratropical circulation changes, which affect the tropics; the ODT mechanism does not operate in our simulations. Our study highlights the importance of initial conditions in the ENSO response to tropical volcanic eruptions and provides explanations for the predominance of post-eruption El Niño events and for the occasional post-eruption La Niña in observations and reconstructions.
How to cite: Pausata, F. S. R., Zanchettin, D., Karamperidou, C., Caballero, R., and Battisti, D. S.: ITCZ shift and extratropical teleconnections drive ENSO response to volcanic eruptions, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2067, https://doi.org/10.5194/egusphere-egu2020-2067, 2020.
EGU2020-12338 | Displays | ITS2.13/AS4.29
Influence of regional anthropogenic changes over Nile region on the climate system during the late Holocene (~2500 years before present)Ram Singh, Allegra N. LeGrande, and Kostas Tsigaridis
The societal impacts of climate change during the late Holocene led to regional anthropogenic changes over the Nile floodplain, which could have acted in tandem with natural factors such as major volcanic eruptions to magnify the local climatic impacts. This study aims to investigate the sensitivity of the regional climate to anthropogenic changes over the Nile river floodplains during the late Holocene (~2,500 years before present). The GISS ModelE Earth system model will be used to simulate scenarios of regionally increasing/decreasing river fraction, changes in vegetation type and cover, and changes in land surface type against a no-change scenario in the absence of volcanic eruptions. The spatial coverage of the Nile river basin is estimated using a GIS shapefile based on elevation data from the Shuttle Radar Topography Mission (SRTM) at 3 arc-seconds (approx. 90-meter) horizontal resolution. The extent of flooding in the model grid (2.0°x2.5° in latitude and longitude) is estimated using the existing high-resolution (0.125°x0.125°) gridded topographic elevation information and mapped over the Nile river floodplains. This study also focuses on evaluating the NASA GISS ModelE's representation of climate feedbacks and responses to anthropogenic changes and volcanic eruptions. We also aim to analyze and quantify the impact of various anthropogenic factors on the African monsoon system and on rainfall over the region that feeds the Nile River.
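The mapping of fine-grid topography onto a coarse model grid can be sketched as a flooded-area-fraction calculation. This is not the authors' code: the synthetic elevations, the flood stage, and the grid sizes are illustrative stand-ins for the SRTM-derived 0.125°x0.125° data and the 2.0°x2.5° ModelE grid.

```python
import numpy as np

# Synthetic fine-grid elevations (m) covering a single coarse model grid cell,
# standing in for the high-resolution gridded topography described in the abstract
rng = np.random.default_rng(0)
fine_elevation = rng.uniform(0.0, 100.0, size=(16, 20))

flood_level = 30.0  # hypothetical flood stage (m)

# The fraction of fine cells lying below the flood level approximates the
# flooded area fraction of the coarse cell
flooded_fraction = float(np.mean(fine_elevation < flood_level))
print(f"flooded fraction of coarse cell: {flooded_fraction:.2f}")
```

Repeating this over every coarse cell in the basin yields a flooded-fraction field that a land-surface scheme can consume.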
How to cite: Singh, R., LeGrande, A. N., and Tsigaridis, K.: Influence of regional anthropogenic changes over Nile region on the climate system during the late Holocene (~2500 years before present), EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12338, https://doi.org/10.5194/egusphere-egu2020-12338, 2020.
EGU2020-22127 | Displays | ITS2.13/AS4.29
Volcanic Impacts on Climate and Society in First Millennium BCE BabyloniaFrancis Ludlow, Conor Kostick, Rhonda McGovern, and Laura Farrelly
This paper capitalizes upon the recent availability of much-improved ice-core chronologies of explosive volcanism for the first millennium BCE in combination with the remarkable record of meteorological data preserved in Babylonian astronomical diaries, written on cuneiform tablets spanning 652–61 BCE and now housed in the British Museum. These diaries comprise systematic economic data on agricultural prices, weather observations at an hourly resolution, river heights for the Euphrates, and other phenomena. Our initial results reveal strong correspondences between multiple previously unrecognized accounts of solar dimming, extreme cold weather and major ice-core volcanic signals. We also observe anomalously high spring floods of the Euphrates at Babylon following major tropical eruptions, which is consistent with climate modelling of anomalously elevated winter precipitation in the headwaters of the Euphrates and Tigris in northeastern Turkey. With the astronomical diaries also providing systematic meteorological information (unparalleled in resolution and scope until at least the Early Modern period), ranging from wind direction and intensity, to the level of cloud cover and references to atmospheric clarity (clear vs. dusty skies), to the general conditions (temperature and precipitation) for all seasons, these sources can, in combination with natural archives such as ice cores, open an unprecedented window into the Middle Eastern climate of the first millennium BCE.
Nor are these or other written sources from the region silent on the societal consequences of extreme weather and other climatic shocks. We will thus finish our paper with a brief case study of responses to the climatic impacts of explosive volcanism during the reign of Esarhaddon, ruler of Assyria, whose reign from 672 BCE suddenly became a troubled one. Contemporary prophecies indicated a loss of cattle, the failure of dates and sesame and the arrival of locusts. Such prophecies were often descriptions of events already occurring, and along with predictions dated to 671 of 'darkness in the land', crop failure and famine, there is definite evidence that Esarhaddon resorted to the ritual of placing a substitute (sacrificial) ruler on the throne for 100 days. This did not, however, resolve the dangers perceived by the Assyrian ruler and he repeated the ritual in 670, along with apotropaic rituals against malaria and plague. That year, nevertheless, saw revolt. Herdsmen refused to supply oxen and sheep to the government officials, who could not travel the land without armed escort. Regional governors appropriated revenues and construction workers halted brick production. Esarhaddon acted decisively in late 670 and early 669, executing a large number of rebellious Assyrian nobles. 669 and 668 remained troubled, however, with prophecies of locusts and plague among cattle and humans, while in 667 Egypt revolted against Assyria in the context of possible shortages of barley and straw.
This paper is a contribution to the Irish Research Council-funded “Climates of Conflict in Ancient Babylonia” (CLICAB) project.
How to cite: Ludlow, F., Kostick, C., McGovern, R., and Farrelly, L.: Volcanic Impacts on Climate and Society in First Millennium BCE Babylonia, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22127, https://doi.org/10.5194/egusphere-egu2020-22127, 2020.
EGU2020-11246 | Displays | ITS2.13/AS4.29
Recent evolution of stratospheric aerosol load from ground-based lidars and satellites: impact of volcanic eruptions and wildfires.Sergey Khaykin, Sophie Godin-Beekmann, Ghassan Taha, Artem Feofilov, Adam Bourassa, Landon Rieger, and Alain Hauchecorne
During the last 2 years (2018-2019) a series of volcanic eruptions led to remarkable enhancements in stratospheric aerosol load. These are eruptions of Ambae (July 2018, Vanuatu), Raikoke (June 2019, Russia) and Ulawun (July 2019, Papua New Guinea). In this study we examine the evolution of the stratospheric aerosol bulk optical properties following these events in consideration of large-scale stratospheric circulation. We use long-term aerosol records by ground-based lidars in both hemispheres together with global observations by various satellite missions (OMPS-LP, SAGE III, OSIRIS, CALIOP) and discuss the consistency between these datasets. In addition, we evaluate the preliminary lower stratosphere aerosol product by ESA Aeolus mission through intercomparison with ground-based lidars.
The 28-yr Observatoire de Haute Provence (OHP) lidar record shows that the Raikoke eruption led to the strongest enhancement of stratospheric aerosol optical depth (SAOD) in the northern extratropics since the Pinatubo eruption. Satellite observations suggest that the stratospheric plume of Raikoke dispersed throughout the entire Northern Hemisphere and ascended up to 27 km altitude. The eruption of Ulawun in the tropics further boosted the stratospheric aerosol load, and by fall 2019 the global mean SAOD was a factor of 2.5 higher than its background level.
At the turn of the year 2020, while both Raikoke and Ulawun aerosols were still present in the stratosphere, a dramatic bushfire event accompanied by vigorous fire-induced thunderstorms (PyroCb) in eastern Australia caused a massive injection of smoke into the stratosphere. The early detections of stratospheric smoke by OMPS-LP suggest that the zonal-mean SAOD perturbation caused by this event exceeds the previous record-breaking PyroCb-related perturbation after the British Columbia fires in August 2017. We use satellite observations of aerosol and trace gases (H2O, CO) to characterize the stratospheric impact of the wildfires and contrast it with that of volcanic eruptions.
How to cite: Khaykin, S., Godin-Beekmann, S., Taha, G., Feofilov, A., Bourassa, A., Rieger, L., and Hauchecorne, A.: Recent evolution of stratospheric aerosol load from ground-based lidars and satellites: impact of volcanic eruptions and wildfires, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11246, https://doi.org/10.5194/egusphere-egu2020-11246, 2020.
EGU2020-2406 | Displays | ITS2.13/AS4.29
Impact of the Ambae, Raikoke and Ulawun eruptions in 2018-2019 on the global stratospheric aerosol layer and climateCorinna Kloss, Pasquale Sellitto, Bernard Legras, Jean-Paul Vernier, Fabrice Jégou, M. Venkat Ratnam, B. Suneel Kumar, B. Lakshmi Madhavan, Maxim Eremenko, and Gwenaël Berthet
Using a combination of satellite, ground-based and in-situ observations, and radiative transfer modelling, we quantify the impact of the most recent moderate volcanic eruptions (Ambae, Vanuatu in July 2018; Raikoke, Russia and Ulawun, New Guinea in June 2019) on the global stratospheric aerosol layer and climate.
For the Ambae volcano (15°S, 167°E), we use Stratospheric Aerosol and Gas Experiment III (SAGE III), Ozone Mapping Profiler Suite (OMPS), Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) and Himawari geostationary satellite observations of the aerosol plume evolution following the Ambae eruption of July 2018. It is shown that the aerosol plume of the main July 2018 Ambae eruption was distributed throughout the global stratosphere, to both hemispheres, by the large-scale Brewer-Dobson circulation (BDC). Ground-based LiDAR observations in Gadanki, India, as well as in-situ Printed Optical Particle Spectrometer (POPS) measurements acquired during the BATAL campaign confirm a widespread perturbation of the stratospheric aerosol layer due to this eruption. Using the UVSPEC radiative transfer model, we also estimate the radiative forcing of this global stratospheric aerosol perturbation. The climate impact is shown to be comparable to that of the well-studied recent moderate stratospheric eruptions of Kasatochi (USA, 2008), Sarychev (Russia, 2009) and Nabro (Eritrea, 2011). Top-of-the-atmosphere radiative forcing values between -0.45 and -0.60 W/m2 are found for the Ambae eruption of July 2018.
In a similar manner, the dispersion of the aerosol plumes of the Raikoke (48°N, 153°E) and Ulawun (5°S, 151°E) eruptions of June 2019 is analyzed. As both eruptions had a stratospheric impact and happened almost simultaneously, it is challenging to completely distinguish the two events. Even though the eruptions occurred very recently, first results show that the Raikoke plume increased aerosol extinction values to roughly twice those following the Ambae eruption. However, as the eruption occurred at higher latitudes, the main bulk of Raikoke aerosols was transported towards the northern higher latitudes in the stratosphere within the BDC, as revealed by OMPS, SAGE III and a new detection algorithm for SO2 and sulfate aerosol using IASI (Infrared Atmospheric Sounding Interferometer). Even though the Raikoke eruption had a larger impact on the stratospheric aerosol layer, both events (the Raikoke and Ambae eruptions) have to be considered in stratospheric aerosol budget and climate studies.
How to cite: Kloss, C., Sellitto, P., Legras, B., Vernier, J.-P., Jégou, F., Ratnam, M. V., Kumar, B. S., Madhavan, B. L., Eremenko, M., and Berthet, G.: Impact of the Ambae, Raikoke and Ulawun eruptions in 2018-2019 on the global stratospheric aerosol layer and climate, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2406, https://doi.org/10.5194/egusphere-egu2020-2406, 2020.
Using a combination of satellite, ground-based and in-situ observations, and radiative transfer modelling, we quantify the impact of the most recent moderate volcanic eruptions (Ambae, Vanuatu in July 2018; Raikoke, Russia and Ulawun, New Guinea in June 2019) on the global stratospheric aerosol layer and climate.
For the Ambae volcano (15°S and 167°E), we use the Stratospheric Aerosol and Gas Experiment III (SAGE III), the Ozone Mapping Profiler Suite (OMPS), the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) and Himawari geostationary satellite observations of the aerosol plume evolution following the Ambae eruption of July 2018. It is shown that the aerosol plume of the main eruption at Ambae in July 2018 was distributed throughout the global stratosphere within the global large-scale circulation (Brewer-Dobson circulation, BDC), to both hemispheres. Ground-based LiDAR observations in Gadanki, India, as well as in-situ Printed Optical Particle Spectrometer (POPS) measurements acquired during the BATAL campaign confirm a widespread perturbation of the stratospheric aerosol layer due to this eruption. Using the UVSPEC radiative transfer model, we also estimate the radiative forcing of this global stratospheric aerosol perturbation. The climate impact is shown to be comparable to that of the well-known and studied recent moderate stratospheric eruptions from Kasatochi (USA, 2008), Sarychev (Russia, 2009) and Nabro (Eritrea, 2011). Top of the atmosphere radiative forcing values between -0.45 and -0.60 W/m2, for the Ambae eruption of July 2018, are found.
In a similar manner, the dispersion of the aerosol plumes of the Raikoke (48°N and 153°E) and Ulawun (5°S and 151°E) eruptions of June 2019 is analyzed. As both eruptions had a stratospheric impact and happened almost simultaneously, it is challenging to completely distinguish the two events. Even though the eruptions occurred very recently, initial results show that the Raikoke eruption increased aerosol extinction values to roughly twice those of the Ambae eruption. However, as the eruption occurred at higher latitudes, the main bulk of the Raikoke aerosol was transported towards the northern high latitudes in the stratosphere within the BDC, as revealed by OMPS, SAGE III and a new detection algorithm for SO2 and sulfate aerosol using IASI (Infrared Atmospheric Sounding Interferometer). Even though the Raikoke eruption had a larger impact on the stratospheric aerosol layer, both events (the eruptions at Raikoke and Ambae) have to be considered in stratospheric aerosol budget and climate studies.
How to cite: Kloss, C., Sellitto, P., Legras, B., Vernier, J.-P., Jégou, F., Ratnam, M. V., Kumar, B. S., Madhavan, B. L., Eremenko, M., and Berthet, G.: Impact of the Ambae, Raikoke and Ulawun eruptions in 2018-2019 on the global stratospheric aerosol layer and climate, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2406, https://doi.org/10.5194/egusphere-egu2020-2406, 2020.
EGU2020-16028 | Displays | ITS2.13/AS4.29
Dispersion Model Evaluation for the Sulfur Dioxide Plume from the 2019 Raikoke Eruption using Satellite Measurements.
Johannes de Leeuw, Anja Schmidt, Claire Witham, Nicolas Theys, Richard Pope, Jim Haywood, Martin Osborne, and Nina Kristiansen
Volcanic eruptions pose a serious threat to the aviation industry, causing widespread disruption. To identify any potential impacts, nine Volcanic Ash Advisory Centres (VAACs) provide global monitoring of all eruptions, informing stakeholders how each volcanic eruption might interfere with aviation. Numerical dispersion models represent vital infrastructure for assessing and forecasting the atmospheric evolution of a volcanic plume.
In this study we investigate the 2019 Raikoke eruption, which emitted approximately 1.5 Tg of sulfur dioxide (SO2), representing the largest volcanic emission of SO2 into the stratosphere since the Nabro eruption in 2011. Using the UK Met Office’s Numerical Atmospheric-dispersion Modelling Environment (NAME), we simulate the evolution of the volcanic gas and aerosol particle plumes (SO2 and sulfate, SO4) across the Northern Hemisphere between 21 June and 17 July 2019. We evaluate the skill and limitations of NAME in modelling volcanic SO2 plumes by comparing our simulations to high-resolution measurements from the Tropospheric Monitoring Instrument (TROPOMI) on board the European Space Agency's Sentinel-5 Precursor (S5P) satellite.
Our comparisons show that NAME accurately simulates the observed location and shape of the SO2 plume in the first few weeks after the eruption. NAME also reproduces the magnitude of the observed SO2 vertical column densities, when emitting 1.5 Tg of SO2, during the first 48 hours after the eruption. On longer timescales, we find that the model-simulated SO2 plume in NAME is more diffuse than in the TROPOMI measurements, resulting in an underestimation of the peak SO2 vertical column densities in the model. This suggests that the diffusion parameters used in NAME are too large in the upper troposphere and lower stratosphere.
Finally, NAME underestimates the total mass of SO2 when compared to estimates from TROPOMI; however, emitting 2 Tg of SO2 in the model improves the comparison, resulting in very good agreement with the satellite measurements.
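To first order, total SO2 mass in a linear dispersion simulation scales with the emitted source mass, which is why re-running with a larger source term can reconcile model and satellite totals. The scaling step can be sketched as follows (a minimal illustration with made-up daily totals, not the actual NAME or TROPOMI values):

```python
import numpy as np

# Minimal sketch of the emission-scaling idea: in a linear dispersion
# simulation, total SO2 mass scales with the emitted mass, so a single
# least-squares factor can be fitted between model and observations.
# All numbers below are illustrative, not the NAME/TROPOMI values.

emitted_mass_tg = 1.5                        # SO2 mass used in the model run
model_total_tg = np.array([1.4, 1.2, 1.0])   # modelled daily total SO2 mass
obs_total_tg = np.array([1.9, 1.6, 1.3])     # satellite-derived daily totals

# Least-squares scale factor between modelled and observed totals
scale = np.sum(model_total_tg * obs_total_tg) / np.sum(model_total_tg**2)
best_emitted_tg = emitted_mass_tg * scale
print(f"scale factor {scale:.2f} -> implied source mass ~{best_emitted_tg:.1f} Tg")
```

With these illustrative numbers the fitted factor implies a source mass of about 2 Tg, mirroring the reasoning in the abstract.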
How to cite: de Leeuw, J., Schmidt, A., Witham, C., Theys, N., Pope, R., Haywood, J., Osborne, M., and Kristiansen, N.: Dispersion Model Evaluation for the Sulfur Dioxide Plume from the 2019 Raikoke Eruption using Satellite Measurements., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-16028, https://doi.org/10.5194/egusphere-egu2020-16028, 2020.
EGU2020-4791 | Displays | ITS2.13/AS4.29
Update of the volcanic sulfur emission inventory in MOCAGE CTM and its impact on the budget of sulfur species in the atmosphere
Claire Lamotte, Virginie Marécal, and Jonathan Guth
Well-constrained emission inventories are essential inputs to chemistry-transport models (CTMs). In addition to anthropogenic emissions, natural sources of pollutants must be considered. Among them, volcanoes are large emitters of gases, including sulfur dioxide (SO2), a volatile species that causes environmental and health issues.
Volcanic SO2 emission inventories are usually integrated into global CTMs in order to improve the modelling of chemical species in the atmosphere. Here we use the MOCAGE model, developed at CNRM, which currently uses the inventory of Andres & Kasgnoc (1998): a temporal average of emissions from some 40 volcanoes, monitored through the synergy of satellite data and surface remote-sensing instruments over 25 years (from the 1970s to 1997). However, this inventory is now quite old and therefore no longer sufficiently accurate.
Thanks to the development of new satellite observations, it has become possible to produce such inventories with improved accuracy. The global coverage and higher sensitivity of these instruments have made it possible to catalogue more emission sources (hard-to-access volcanoes, small eruptions and even passive degassing). Hence, the new inventory of Carn et al. (2016, 2017), based on satellite observations, has been implemented in MOCAGE. Besides being recent (from 1978 up to 2015), it combines eruptions and passive degassing over more than 160 volcanoes. Passive degassing fluxes are provided as annual averages and eruptive fluxes as daily total quantities (in case of events). In addition, information on volcano vent altitudes and eruptive plume heights is available, which has been used to better constrain the model.
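One way to picture how such an inventory feeds a CTM is to expand the annual-mean passive-degassing flux and the daily eruptive totals into a single daily emission series per volcano. The sketch below is a hypothetical illustration of that structure, not the actual MOCAGE implementation:

```python
import numpy as np

# Hypothetical sketch: combine an annual-mean passive degassing flux with
# discrete eruptive events (daily totals) into one daily emission series.
# Function name and values are illustrative, not the MOCAGE code.

def daily_emissions(annual_degassing_kt, eruptions, n_days=365):
    """eruptions: dict mapping day-of-year index -> erupted SO2 mass (kt)."""
    series = np.full(n_days, annual_degassing_kt / n_days)  # background degassing
    for day, mass_kt in eruptions.items():
        series[day] += mass_kt                              # superimpose event
    return series

# Made-up volcano: 365 kt/yr degassing plus a 50 kt eruption on day 100
series = daily_emissions(annual_degassing_kt=365.0, eruptions={100: 50.0})
print(series[0], series[100])  # 1.0 kt/day background; 51.0 kt on eruption day
```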
We focus our study on the global scale. The years 2013 and 2014 were chosen as the years with the lowest and highest total eruptive emissions, respectively, in Carn's inventory. Thus, 2013 mainly highlights the impact of passive degassing, while 2014 provides additional information on eruptions.
For each of the years studied, the sulfur species budget in the MOCAGE simulations increases when the inventory is updated, and the relative contribution of volcanic sulfur emissions is therefore larger. We note a global increase in the sulfur dioxide and sulfate aerosol burdens; the increase is even more significant when the injection heights of the emissions are taken into account.
How to cite: Lamotte, C., Marécal, V., and Guth, J.: Update of the volcanic sulfur emission inventory in MOCAGE CTM and its impact on the budget of sulfur species in the atmosphere, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4791, https://doi.org/10.5194/egusphere-egu2020-4791, 2020.
EGU2020-797 | Displays | ITS2.13/AS4.29
Monitoring volcanic SO2 emissions with the Infrared Atmospheric Sounding Interferometer
Isabelle Taylor, Elisa Carboni, Tamsin A. Mather, and Roy G. Grainger
Satellite remote sensing has been widely used to measure sulphur dioxide (SO2) emissions from volcanoes. The Infrared Atmospheric Sounding Interferometer (IASI) is one such instrument that has been used to examine the emissions from large explosive eruptions. Much less work has been done using IASI to study the emissions from smaller eruptions, non-eruptive degassing or anthropogenic sources, and similarly it is rarely used for examining long-term trends in activity. Now, with three IASI instruments in orbit and over ten years of data, there is a perfect opportunity to explore these topics. This study applied a ‘fast’ linear retrieval, developed for IASI in Oxford, across the globe for a ten-year period. Global annual averages were dominated by the emissions from large eruptions (e.g. Nabro in 2011), but elevated signals could also be identified from smaller volcanic sources and industrial centres, suggesting the technique has promise for detecting lower-level emissions. A systematic approach was then taken at over 100 volcanoes worldwide, rotating the linear retrieval output for each orbit with the wind direction at the volcano’s vent or, in cases where the plume was emitted at a greater height, with the observed plume direction. This isolates the elevated signal downwind of the volcano. The rotated outputs were then averaged over monthly, annual and multi-annual time periods. Analysis of the upwind and downwind values establishes whether there is an elevated signal and its intensity. An inventory was then constructed from these observations which shows how these emissions varied over a ten-year period. Trends in SO2 emission were compared against fluxes generated for the Ozone Monitoring Instrument (OMI) and the number of thermal anomalies detected by the MODVOLC algorithm developed for MODIS.
It was found, for example, that long-term trends are more easily identified at high-altitude volcanoes such as Popocatépetl, Sabancaya and Nevado del Ruiz. This is consistent with the idea that the instrument performs better in regions with lower levels of water vapour (e.g. above the boundary layer).
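The plume-rotation step described above amounts to a simple coordinate transform: pixel offsets from the vent are rotated by the wind (or observed plume) direction so that every orbit's plume lies along the same downwind axis, after which upwind and downwind values can be averaged and compared. A minimal sketch with made-up pixels (not the actual retrieval output):

```python
import numpy as np

# Minimal sketch of the plume-rotation step: rotate pixel offsets from the
# volcano so the wind/plume direction maps onto the +x (downwind) axis.
# Angles, offsets and SO2 amounts below are illustrative.

def rotate_to_downwind(dx, dy, direction_deg):
    """Rotate offsets so the given direction becomes the +x axis."""
    theta = np.deg2rad(direction_deg)
    c, s = np.cos(theta), np.sin(theta)
    return c * dx + s * dy, -s * dx + c * dy

dx = np.array([0.0, 0.1, -0.1, 0.0])    # east-west offsets (degrees)
dy = np.array([0.5, 1.0, 1.5, -0.5])    # north-south offsets (degrees)
so2 = np.array([3.0, 2.0, 1.0, 0.2])    # column amounts (arbitrary units)

rx, _ = rotate_to_downwind(dx, dy, 90.0)  # plume here blows due north
downwind_mean = so2[rx > 0].mean()        # elevated signal downwind
upwind_mean = so2[rx < 0].mean()          # background upwind
print(downwind_mean, upwind_mean)
```

Averaging many rotated orbits in this frame builds up the downwind composite from which an elevated signal, and hence an emission estimate, can be extracted.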
How to cite: Taylor, I., Carboni, E., Mather, T. A., and Grainger, R. G.: Monitoring volcanic SO2 emissions with the Infrared Atmospheric Sounding Interferometer, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-797, https://doi.org/10.5194/egusphere-egu2020-797, 2020.
EGU2020-16190 | Displays | ITS2.13/AS4.29
‘This advice is absurd’: issues with providing generic advice on community protection from chronic volcanic degassing
Claire J. Horwell and Tamar Elias
Around the world, a number of volcanoes are passively degassing, chronically exposing communities to potentially harmful gases and aerosols. The medical evidence to date is unclear about the long-term health impacts of such exposures, but there is evidence that people experience an exacerbation of existing respiratory disease, such as asthma, bronchitis and COPD. In addition, there is a range of physiological and psychological symptoms which even otherwise healthy people experience, and which can impact their lives and livelihoods. In Hawaii, prior to the end of the 2018 Lower East Rift Zone eruption crisis of Kīlauea Volcano, communities downwind of the vents were frequently exposed to volcanic pollution or ‘vog’, with exposures worsening during the 2018 crisis. Local emergency and health agencies provided generic advice on measures to reduce exposure, but the usefulness and uptake of the advice were unknown. A survey of Hawai’i Island residents in 2015 highlighted the range and severity of symptoms that they perceived to be caused by vog exposure, and exposed a lack of application of the official advice. Some respondents described how their lifestyles (e.g., the open structure of their homes and the availability of air conditioning) did not allow them to implement key strategies such as closing doors and windows and staying indoors. The perceived irrelevance of official advice, and a perception by some that vog information was suppressed due to political pressures, led to mistrust of the official agencies by a subset of the population. The survey also revealed undocumented strategies that individuals were using to protect themselves and cope with symptoms of vog exposure. In partnership with local agencies, we rewrote the guidance to be more applicable to the local situation. Revised guidance incorporated successful local practices where medical evidence of efficacy could be found.
We also developed an online interagency ‘vog dashboard’ that provided a comprehensive source for vog information and advice. The ‘Vog Talk’ Facebook page was also initiated to provide a forum for informal discussion amongst community members and between communities and agency representatives. During the 2018 eruption crisis, these resources were extensively utilised and were considered primary sources of information for Hawaii residents, tourists and the world’s media. The experience in Hawaii demonstrates the importance of a multi-disciplinary approach to engaging communities, with health management professionals, physical and social scientists, and community representatives working together to ensure that issued advice is trusted, relevant and practical.
How to cite: Horwell, C. J. and Elias, T.: ‘This advice is absurd’: issues with providing generic advice on community protection from chronic volcanic degassing, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-16190, https://doi.org/10.5194/egusphere-egu2020-16190, 2020.
EGU2020-9858 | Displays | ITS2.13/AS4.29
Solubility of metals in aerosol samples from Mount Etna during the EPL-REFLECT campaign
Chiara Giorio, Sara D'Aronco, Lidia Soldà, Salvatore Giammanco, Alessandro La Spina, Giuseppe Salerno, Alessia Donatucci, Tommaso Caltabiano, and Pasquale Sellitto
Volcanoes emit a chemically complex cocktail of gases and aerosols into the atmosphere, which can affect Earth’s climate (1) and human health. The vast majority of volcanogenic fatalities involve the obvious thermal and physical injuries resulting from an eruption, but many of the emissions from volcanoes are toxic and include compounds such as sulfates and metals, which are known to disrupt biological systems (2). Yet, there is a lack of knowledge on the toxicity of compounds found in volcanic plumes and their fate in the atmosphere.
Research has focussed mainly on the impacts of large-magnitude explosive eruptions. While emissions from many non-explosive eruptions are continuous and prolonged, their climatic effects and potential impacts on human health have not been studied extensively. Once the plume disperses in the atmosphere, the aerosol particle components can mix and interact with oxidants and organic compounds present in the atmosphere. How these chemical components interact, and how the interactions affect the Earth’s climate, particle toxicity and human health, is largely unknown, especially for trace metals.
In the framework of the EPL-REFLECT project (Etna Plume Lab – near-source estimations of Radiative EFfects of voLcanic aErosols for Climate and air quality sTudies), a field campaign was conducted on Mount Etna in July 2019 in which samples of atmospheric aerosol were collected during non-explosive degassing activity. Samples were collected both at the crater and along a transect following the volcanic plume downslope to the closest inhabited areas. Samples were analysed for trace metals and organic compounds, including solubility tests (3) to assess how tropospheric processing of the aerosol affects metal bioavailability and, potentially, the toxicity of the aerosol.
(1) von Glasow, R. 2010. Atmospheric chemistry in volcanic plumes. Proceedings of the National Academy of Sciences, vol. 107, pp. 6594–6599. DOI: 10.1073/pnas.0913164107
(2) Weinstein, P., Horwell, C.J., Cook, A. 2013. Volcanic Emissions and Health. In: Essentials of Medical Geology, Springer Netherlands, Dordrecht, pp. 217–238. DOI: 10.1007/978-94-007-4375-5_10
(3) Tapparo, A., Di Marco, V., Badocco, D., D’Aronco, S., Soldà, L., Pastore, P., Mahon, B.M., Kalberer, M., Giorio, C. 2019. Formation of metal-organic ligand complexes affects solubility of metals in airborne particles at an urban site in the Po Valley. Chemosphere, in press. DOI: 10.1016/j.chemosphere.2019.125025
How to cite: Giorio, C., D'Aronco, S., Soldà, L., Giammanco, S., La Spina, A., Salerno, G., Donatucci, A., Caltabiano, T., and Sellitto, P.: Solubility of metals in aerosol samples from Mount Etna during the EPL-REFLECT campaign, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9858, https://doi.org/10.5194/egusphere-egu2020-9858, 2020.
EGU2020-405 | Displays | ITS2.13/AS4.29
Spatial and temporal variations in ambient SO2 and PM2.5 levels influenced by Kīlauea Volcano, Hawai'i, 2007 - 2018
Rachel Whitty, Evgenia Ilyinskaya, Emily Mason, Penny Wieser, Emma Liu, Anja Schmidt, Tjarda Roberts, Melissa Pfeffer, Barbara Brooks, Tamsin Mather, Marie Edmonds, Tamar Elias, David Schneider, Clive Oppenheimer, Adrian Dybwad, Patricia Nadeau, and Christoph Kern
The 2018 eruption of Kīlauea volcano, Hawai'i, resulted in enormous gas emissions from the Lower East Rift Zone (LERZ) of the volcano. This led to important changes to air quality in downwind communities. We analyse and present measurements of atmospheric sulfur dioxide (SO2) and aerosol particulate matter < 2.5 µm (PM2.5) collected between 2007 and 2018 by the Hawai'i Department of Health (HDOH) and National Park Service (NPS) operational air quality monitoring networks, together with a community-operated network of low-cost PM2.5 sensors on the Island of Hawai'i. During this period, the two largest observed increases in Kīlauea's volcanic emissions were the summit eruption that began in 2008 (Kīlauea emissions averaged 5–6 kt/day SO2 over the course of the eruption) and the LERZ eruption in May–August 2018, when SO2 emission rates likely reached 200 kt/day in June. Here we focus on characterising the airborne pollutants arising from the 2018 LERZ eruption and the spatial distribution and severity of air pollution events across the Island of Hawai'i. The LERZ eruption caused the most frequent and severe exceedances of Environmental Protection Agency 24-hour-mean PM2.5 air quality thresholds in Hawai'i since 2010. In Kona, for example, there were eight exceedances during the 2018 LERZ eruption, whereas there had been no exceedances in the previous eight years as measured by the HDOH and NPS networks. SO2 air pollution during the LERZ eruption was most severe in communities in the south and west of the island, with maximum 24-hour-mean mass concentrations of 728 µg/m3 recorded in Ocean View (100 km west of the LERZ emission source) in May 2018. Data from the low-cost sensor network correlated well with data from the HDOH PM2.5 instruments (Kona station, R2 = 0.89), demonstrating that these low-cost sensors provide a viable means to rapidly augment reference-grade instrument networks during crises.
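The exceedance counting and sensor comparison described above reduce to simple operations on the daily time series. A minimal sketch with made-up concentrations (the 35 µg/m³ value is the EPA 24-hour PM2.5 standard; the data and station values are illustrative, not the HDOH/NPS measurements):

```python
import numpy as np

# Minimal sketch of the air-quality analysis: count 24-hour-mean PM2.5
# exceedances of the EPA 24-hour standard (35 ug/m3) and compute R^2
# between a reference monitor and a co-located low-cost sensor.
# All concentration values below are made up.

EPA_PM25_24H = 35.0  # ug/m3, EPA 24-hour PM2.5 standard

daily_means = np.array([12.0, 40.0, 55.0, 20.0, 36.0])  # 24-h mean PM2.5
exceedances = int(np.sum(daily_means > EPA_PM25_24H))   # days over threshold

reference = np.array([10.0, 42.0, 50.0, 18.0, 33.0])    # reference monitor
low_cost = np.array([12.0, 45.0, 47.0, 21.0, 30.0])     # low-cost sensor

# Coefficient of determination between the two co-located series
r_squared = np.corrcoef(reference, low_cost)[0, 1] ** 2
print(exceedances, round(r_squared, 3))
```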
How to cite: Whitty, R., Ilyinskaya, E., Mason, E., Wieser, P., Liu, E., Schmidt, A., Roberts, T., Pfeffer, M., Brooks, B., Mather, T., Edmonds, M., Elias, T., Schneider, D., Oppenheimer, C., Dybwad, A., Nadeau, P., and Kern, C.: Spatial and temporal variations in ambient SO2 and PM2.5 levels influenced by Kīlauea Volcano, Hawai'i, 2007 - 2018, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-405, https://doi.org/10.5194/egusphere-egu2020-405, 2020.
EGU2020-4 | Displays | ITS2.13/AS4.29
Where is the Toba eruption in the Vostok ice core? Clues from tephra, O and S isotopes
Joel Savarino, Elsa Gautier, Nicolas Caillon, Emmanuelle Albalat, Francis Albarède, Shohei Hattori, Jean-Robert Petit, and Vladimir Lipenkov
The ca. 74 ka BP "super-eruption" of Toba volcano in Sumatra is the largest known Quaternary eruption. It expelled an estimated 2800 km3 of dense rock equivalent, creating a caldera of 100 x 30 km. The eruption is estimated to have been 3500 times greater than the Tambora eruption that created the "year without a summer" of 1816 in Europe (Oppenheimer, 2002). However, the consequences of this "mega-eruption" for climate and human evolution are still debated and uncertain. There is no evidence that the eruption triggered any catastrophic climate change such as a "nuclear winter". One such missing line of evidence lies in the ice.
In the ice core community, this eruption remains a mystery. The estimated size of the eruption should have left a gigantic mark in the ice, at least in the form of a huge sulfuric acid layer, yet none of the ice records covering this period shows any such singularity. The sulfate record is so unremarkable that it is in fact difficult to attribute any specific sulfate peak to this event.
In an effort to synchronize the Vostok and EPICA Dome C ice cores, Svensson et al. (2013) identified three possible sulfuric acid layers for the Toba eruption in the Vostok ice core. To test whether one of these events could have been the Toba eruption, we performed sulfur and oxygen isotope analyses of the three sulfuric acid layers in the hope of revealing some distinguishing feature. The sulfur results show that (1) all three events injected their products into the stratosphere, and (2) the sulfur isotopic compositions of the three events share a common array, in line with other stratospheric eruptions; however, one of the three acid layers shows an extremely weak and unusual oxygen anomaly, potentially indicating a major eruption. To remove the last doubts about whether one eruption, or a series of eruptions, can be related to Toba, geochemical analysis of the volcanic glasses trapped in the ice will be performed and presented.
How to cite: Savarino, J., Gautier, E., Caillon, N., Albalat, E., Albarède, F., Hattori, S., Petit, J.-R., and Lipenkov, V.: Where is the Toba eruption in the Vostok ice core? Clues from tephra, O and S isotopes, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4, https://doi.org/10.5194/egusphere-egu2020-4, 2020.
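The stratospheric fingerprint referred to above is commonly screened through the mass-independent sulfur anomaly Δ33S, i.e. the deviation of δ33S from the mass-dependent fractionation line. A minimal sketch, using the standard reference exponent of ~0.515 and illustrative δ-values (not the Vostok measurements):

```python
def capital_delta_33s(delta33s, delta34s, slope=0.515):
    """Mass-independent sulfur anomaly Δ33S (per mil): deviation of the
    measured δ33S from the mass-dependent fractionation line."""
    return delta33s - 1000.0 * ((1.0 + delta34s / 1000.0) ** slope - 1.0)

# A purely mass-dependent (tropospheric) sample plots at Δ33S ≈ 0,
# whereas stratospheric volcanic sulfate carries a non-zero anomaly.
md = capital_delta_33s(delta33s=5.15, delta34s=10.0)     # ≈ 0 ‰
strat = capital_delta_33s(delta33s=6.00, delta34s=10.0)  # clearly non-zero
print(round(md, 3), round(strat, 3))
```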
EGU2020-4131 | Displays | ITS2.13/AS4.29
Toba volcano super eruption destroyed the ozone layer and caused a human population bottleneck
Sergey Osipov, Georgiy Stenchikov, Kostas Tsigaridis, Allegra LeGrande, Susanne Bauer, Mohamed Fnais, and Jos Lelieveld
Volcanic eruptions trigger a broad spectrum of climatic responses. For example, the Mount Pinatubo eruption in 1991 forced an El Niño and global cooling, and the Tambora eruption in 1815 caused the "Year Without a Summer." Especially grand eruptions such as Toba around 74,000 years ago can push the Earth's climate into a volcanic winter state, significantly lowering the surface temperature and precipitation globally. Here we present a new, previously overlooked element of the volcanic effects spectrum: the radiative mechanism of stratospheric ozone depletion. We found that the volcanic plume of Toba enhanced the UV optical depth and suppressed the primary formation of stratospheric ozone from O2 photolysis. Sulfate aerosols additionally reflect the photons needed to break the O2 bond (λ < 242 nm), otherwise controlled by ozone absorption and Rayleigh scattering alone during volcanically quiescent conditions. Our NASA GISS ModelE simulations of the Toba eruption reveal up to 50% global ozone loss due to the overall photochemistry perturbations of the sulfate aerosols. We also consider and quantify the radiative effects of SO2, which partially compensated for the ozone loss by inhibiting the photolytic O3 sink.
Our analysis shows that the magnitude of the ozone loss and UV-induced health-hazardous effects after the Toba eruption are similar to those in the aftermath of a potential nuclear conflict. These findings suggest a “Toba ozone catastrophe" as a likely contributor to the historic population decline in this period, consistent with a genetic bottleneck in human evolution.
How to cite: Osipov, S., Stenchikov, G., Tsigaridis, K., LeGrande, A., Bauer, S., Fnais, M., and Lelieveld, J.: Toba volcano super eruption destroyed the ozone layer and caused a human population bottleneck, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4131, https://doi.org/10.5194/egusphere-egu2020-4131, 2020.
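The λ < 242 nm cutoff quoted above is set by the O2 bond energy: only photons at least that energetic can drive the O2 photolysis that is the primary source of stratospheric ozone. A back-of-envelope check, taking ~498 kJ/mol as the approximate O=O dissociation energy:

```python
H = 6.62607015e-34    # Planck constant, J s
C = 2.99792458e8      # speed of light, m/s
N_A = 6.02214076e23   # Avogadro constant, 1/mol
O2_BOND = 498.0       # approximate O=O dissociation energy, kJ/mol

def photon_energy_kj_per_mol(wavelength_nm):
    """Energy of one mole of photons at the given wavelength."""
    return H * C / (wavelength_nm * 1e-9) * N_A / 1000.0

# Wavelength whose photon energy exactly matches the O2 bond energy.
threshold_nm = H * C * N_A / (O2_BOND * 1000.0) * 1e9
print(round(threshold_nm, 1))  # ~240 nm, consistent with the λ < 242 nm cutoff
```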
EGU2020-5038 | Displays | ITS2.13/AS4.29
The 4.2 ka cal BP major eruption of Cerro Blanco, Central Andes
Jose-Luis Fernandez-Turiel, Francisco-Jose Perez-Torrado, Alejandro Rodríguez-Gonzalez, Norma Ratto, Marta Rejas, and Agustin Lobo
The major eruption of the Cerro Blanco Volcanic Complex (CBVC), in the Central Volcanic Zone of the Andes, NW Argentina, dated at 4410–4150 a cal BP, was investigated, confirming that it is the most important of the three major Holocene felsic eruptive events identified in the southern Puna (Fernandez-Turiel et al., 2019). Identification of pre-, syn-, and post-caldera products of the CBVC allowed us to estimate the distribution of the Plinian fallout during the paroxysmal syn-caldera phase of the eruption. Results provide evidence for a major rhyolitic explosive eruption that spread volcanic deposits over an area of about 500,000 km2, accumulating >100 km3 of tephra (bulk volume). This value exceeds the lower threshold of a Volcanic Explosivity Index (VEI) of 7. Ash-fall deposits mantled the region at distances >400 km from source, and thick pyroclastic-flow deposits filled neighbouring valleys up to several tens of kilometres from the vent. This eruption is the largest documented in the Central Volcanic Zone of the Andes during the past five millennia, and is probably one of the largest Holocene explosive eruptions in the world.
The implications of these findings reach far beyond providing chronostratigraphic markers. Further interdisciplinary research should be performed to draw general conclusions about the impacts on local environments and the disruptive consequences for local communities. This is invaluable not just for understanding how the system may have been affected over time, but also for evaluating volcanic hazards and risk mitigation measures related to potential future large explosive eruptions.
Financial support was provided by the ASH and QUECA Projects (MINECO, CGL2008–00099 and CGL2011–23307). We acknowledge the assistance in the analytical work of labGEOTOP Geochemistry Laboratory (infrastructure co–funded by ERDF–EU Ref. CSIC08–4E–001) and DRX Laboratory (infrastructure co–funded by ERDF–EU Ref. CSIC10–4E–141) (J. Ibañez, J. Elvira and S. Alvarez) of ICTJA-CSIC, and EPMA and SEM Laboratories of CCiTUB (X. Llovet and J. Garcia Veigas). This study was carried out in the framework of the Research Consolidated Groups GEOVOL (Canary Islands Government, ULPGC) and GEOPAM (Generalitat de Catalunya, 2017 SGR 1494).
Fernandez–Turiel, J.L., Perez–Torrado, F.J., Rodriguez–Gonzalez, A., Saavedra, J., Carracedo, J.C., Rejas, M., Lobo, A., Osterrieth, M., Carrizo, J.I., Esteban, G., Gallardo, J., Ratto, N., 2019. The large eruption 4.2 ka cal BP in Cerro Blanco, Central Volcanic Zone, Andes: Insights to the Holocene eruptive deposits in the southern Puna and adjacent regions. Estudios Geologicos 75, e088.
How to cite: Fernandez-Turiel, J.-L., Perez-Torrado, F.-J., Rodríguez-Gonzalez, A., Ratto, N., Rejas, M., and Lobo, A.: The 4.2 ka cal BP major eruption of Cerro Blanco, Central Andes, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5038, https://doi.org/10.5194/egusphere-egu2020-5038, 2020.
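The claim that >100 km3 of bulk tephra crosses the VEI 7 threshold can be checked against the usual logarithmic volume scale, on which each VEI step corresponds to a tenfold volume increase (VEI 7 from 100 km3, VEI 8 from 1000 km3). The helper below is an illustrative sketch of that classification, not part of the authors' analysis:

```python
import math

def vei_from_bulk_volume(volume_km3):
    """Approximate VEI class from bulk tephra volume in km³, using the
    log10 scale on which each VEI step is a tenfold volume increase
    (VEI 5 from 1 km³, VEI 6 from 10 km³, VEI 7 from 100 km³, ...)."""
    if volume_km3 <= 0:
        raise ValueError("volume must be positive")
    return min(8, max(0, int(math.floor(math.log10(volume_km3))) + 5))

print(vei_from_bulk_volume(100.0))  # Cerro Blanco's >100 km³ -> VEI 7
```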
EGU2020-19275 | Displays | ITS2.13/AS4.29
Impact of volcanic halogens on the ozone layer and climate, a look to the past to highlight the present
Hélène Balcone-Boissard, Thiébaut D'Augustin, Georges Boudon, Slimane Bekki, Magali Bonifacie, Omar Boudouma, Anne-Sophie Bouvier, Guillaume Carazzo, Etienne Deloule, Michel Fialin, and Nicolas Rividi
Explosive eruptions of the Plinian type inject large amounts of particles (pumice, ash, aerosols) and volatile species into the atmosphere. They result from the rapid discharge of a magma chamber and involve large volumes of magma (from a km3 to hundreds of km3). Such eruptions correspond to a rapid ascent of magma in the conduit driven by the exsolution of volatile species. If the magma supply is continuous, this jet produces a convective eruptive column that can reach tens of km in height and transports gas and particles (pumice, ash, aerosols) directly into the stratosphere. Depending on the latitude of the volcano, the volume of magma involved, the height of the eruptive plume, and the composition of the released gaseous and particulate mixture, these events can strongly affect the environment at the local or even global scale. Studies of the global impacts of volcanic eruptions have focused almost exclusively on the sulfur component. Volcanoes, however, also emit halogens, which have a crucial impact on the ozone layer and therefore on climate.
The objective of our project is to revisit the impact of volcanism on the atmosphere and climate by considering not only the sulfur component but also the halogen component. We will provide fieldwork-based constraints on the strength of halogen (Cl and Br) emissions and on degassing processes for key eruptions; we will characterise the dynamics of volcanic plumes, notably the vertical distribution of emissions; and we will explore and quantify the respective impacts of sulfur and halogen emissions on the ozone layer and climate.
Here we shed light on the methodology, which combines field campaigns, laboratory analysis of collected samples, and a hierarchy of modelling tools. The approach combines field studies, petrological characterization, geochemical measurements including isotopic data, estimation of the volume of magma involved and of the injection height of gases and particles by modelling the eruptive plume dynamics, and numerical simulation of the impacts at the plume scale and at the global scale. A first halogen budget will also be presented.
How to cite: Balcone-Boissard, H., D'Augustin, T., Boudon, G., Bekki, S., Bonifacie, M., Boudouma, O., Bouvier, A.-S., Carazzo, G., Deloule, E., Fialin, M., and Rividi, N.: Impact of volcanic halogens on the ozone layer and climate, a look to the past to highlight the present, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19275, https://doi.org/10.5194/egusphere-egu2020-19275, 2020.
EGU2020-8656 | Displays | ITS2.13/AS4.29
Revisiting the climate impact of the ~13,000 yr BP Laacher See eruption
Ulrike Niemeier, Felix Riede, Claudia Timmreck, and Anke Zernack
The large VEI 6 explosive eruption of the Laacher See volcano, dated to c. 13,000 yrs BP (Reinig et al., 2020), marks the end of explosive volcanism in the East Eifel volcanic zone (Germany). It has previously been argued that this eruption temporarily impacted Northern Hemisphere climate (Graf and Timmreck, 2001), environments (Baales et al., 2002) and human communities (e.g. Blong et al., 2018). It has also recently been suggested again that the eruption may in fact be implicated in the onset of the Younger Dryas. Recent advances in the modelling of volcanically-induced climatic forcing warrant renewed attention to the eruption’s potential influence on Northern Hemisphere climate. Detailed reconstructions of its eruption dynamics have been proposed: the eruption might have lasted several weeks, most likely with a short (~10 h) intense initial phase, and a bipartite NE and S plume deposited tephra to the north-east of the volcano towards the Baltic Sea and to the south towards Italy (Riede et al., 2011).
In revisiting the eruption’s potential influence on Northern Hemisphere climate, we here present revised model simulations of the radiative impacts of the LSE using a global stratospheric aerosol model and new sulphur dioxide (SO2) emission estimates. The simulations were performed with the general circulation model MAECHAM5-HAM, which is coupled to an aerosol microphysical model. This allows us to simulate the evolution of the volcanic sulfur cloud and the transport of the ash cloud. The positions of the observed LSE deposits depend on the weather and wind direction during the eruption, so reproducing the observed deposit locations requires simulating specific weather conditions. Our simulations provide significantly improved insights into the meteorological situation during the eruption as well as its impacts on Northern Hemisphere climate, with attendant implications for ecological and cultural impacts.
References
Baales, M., Jöris, O., Street, M., Bittmann, F., Weninger, B. and Wiethold, J.: Impact of the Late Glacial Eruption of the Laacher See Volcano, Central Rhineland, Germany, Quaternary Research, 58(3), 273–288, doi:10.1006/qres.2002.2379, 2002.
Blong, R. J., Riede, F. and Chen, Q.: A fuzzy logic methodology for assessing the resilience of past communities to tephra fall: a Laacher See eruption 13,000 year BP case, Volcanica, 1(1), 63–84, doi:10.30909/vol.01.01.6384, 2018.
Graf, H.-F. and Timmreck, C.: A general climate model simulation of the aerosol radiative effects of the Laacher See eruption (10,900 B.C.), Journal of Geophysical Research, 106(14), 14747–14756, doi:10.1029/2001JD900152, 2001.
Reinig, F., Cherubini, P., Engels, S., Esper, J., Guidobaldi, G., Jöris, O., Lane, C., Nievergelt, D., Oppenheimer, C., Park, C., Pfanz, H., Riede, F., Schmincke, H.-U., Street, M., Wacker, L. and Büntgen, U.: Towards a dendrochronologically refined date of the Laacher See eruption around 13,000 years ago, Quaternary Science Reviews, 229, 106128, doi:10.1016/j.quascirev.2019.106128, 2020.
Riede, F., Bazely, O., Newton, A. J. and Lane, C. S.: A Laacher See-eruption supplement to Tephrabase: Investigating distal tephra fallout dynamics, Quaternary International, 246(1–2), 134–144, doi:10.1016/j.quaint.2011.06.029, 2011.
How to cite: Niemeier, U., Riede, F., Timmreck, C., and Zernack, A.: Revisiting the climate impact of the ~13,000 yr BP Laacher See eruption, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8656, https://doi.org/10.5194/egusphere-egu2020-8656, 2020.
EGU2020-4045 | Displays | ITS2.13/AS4.29
On the dependency of simulated volcanically-forced variability to model configuration
Claudia Timmreck, Matthew Toohey, and Davide Zanchettin
Several uncertainties affect the simulation of the climatic response to strong volcanic forcing by coupled climate models; these primarily stem from model specificities and intrinsic variability. To better understand the relative contribution of both sources of uncertainty, the Model Intercomparison Project on the climatic response to Volcanic forcing (VolMIP) has been initiated as part of the CMIP6 protocol. VolMIP has defined a coordinated set of idealized volcanic perturbation experiments, with the same prescribed volcanic forcing and coherent sampling of initial conditions, to be performed by the different participating coupled climate models. However, as the VolMIP effort focuses on comparison across different models, an open question remains about how different configurations of the same model affect the comparability of results.
Here, we present first results of CMIP6 VolMIP simulations performed with MPI-ESM1.2 at two resolutions. The low-resolution (LR) configuration employs an atmospheric resolution of T63 (~200 km) and a nominal ocean resolution of 1.5°. The high-resolution (HR) configuration doubles the horizontal resolution of the atmospheric component (T127, ~100 km), with a spontaneously generated QBO, and uses an eddy-permitting ocean resolution of 0.4°.
In this contribution we illustrate results from the volc-pinatubo experiments, which focus on the assessment of uncertainty in the seasonal-to-interannual climatic response to an idealized 1991 Pinatubo-like eruption, and from the volc-long experiments, which are designed to investigate the long-term dynamical climate response to volcanic eruptions. We compare the responses of different climate variables, e.g. near-surface air temperature, precipitation and sea ice, on global and regional scales. Special emphasis will be placed on the volcanic impact on the tropical hydrological cycle.
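The seasonal-to-interannual response comparison described above amounts to an ensemble-mean anomaly relative to pre-eruption conditions. A minimal sketch with synthetic data (not VolMIP output; the variable names are illustrative):

```python
import numpy as np

def post_eruption_anomaly(ensemble, eruption_idx, ref_years=5):
    """Ensemble-mean anomaly relative to a pre-eruption reference period.

    ensemble: array of shape (members, years), e.g. near-surface air temperature.
    """
    ref = ensemble[:, eruption_idx - ref_years:eruption_idx].mean(axis=1, keepdims=True)
    return (ensemble - ref).mean(axis=0)  # shape (years,)

# Synthetic illustration: 3 members, ~0.5 K cooling in the eruption year.
rng = np.random.default_rng(0)
ens = rng.normal(0.0, 0.05, size=(3, 10))
ens[:, 5] -= 0.5
anom = post_eruption_anomaly(ens, eruption_idx=5)
```

In the VolMIP protocol itself, the spread due to intrinsic variability is additionally sampled through the coordinated choice of initial conditions across members.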
How to cite: Timmreck, C., Toohey, M., and Zanchettin, D.: On the dependency of simulated volcanically-forced variability to model configuration, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4045, https://doi.org/10.5194/egusphere-egu2020-4045, 2020.
EGU2020-3464 | Displays | ITS2.13/AS4.29
Interactive stratospheric aerosol model experiments suggest a strong impact of climate change on the aerosol evolution and radiative forcing from future eruptions.
Thomas Aubry, Anja Schmidt, and Jim Haywood
Radiative forcing from stratospheric volcanic sulfate aerosols is a key driver of climate variability. However, climate change may also impact volcanic forcing which remains largely unexplored. Atmospheric processes indeed control virtually all mechanisms that govern volcanic forcing, such as the rise of the volcanic column, the chemical and microphysical evolution of volcanic aerosols and their transport in the atmosphere.
Accordingly, we present novel numerical experiments combining chemistry-climate and volcanic plume modelling to investigate how climate change will affect volcanic forcing. We compare the aerosol evolution and radiative forcing following two eruption cases in two different climates (historical 1990s and SSP5-8.5 2090s). We chose two tropical eruptions: i) a strong-intensity (i.e., mass flux), Pinatubo-like eruption emitting 10 Tg of sulfur dioxide (SO2); and ii) a moderate-intensity eruption emitting 1 Tg of SO2, similar to eruptions such as those of Merapi in 2010, Nabro in 2011 or Kelud in 2014, which have had major impacts on the stratospheric aerosol background and are thought to have contributed to the global temperature hiatus in the early 21st century. The chemistry-climate model that we use (UM-UKCA version 11.2) has the capacity to interactively simulate the chemical and microphysical evolution of stratospheric sulfate aerosol given an initial injection of SO2. Furthermore, we use a plume model to calculate SO2 injection heights for a given eruption intensity and the atmospheric conditions simulated by UM-UKCA.
In our experiments, the peak stratospheric aerosol optical depth (SAOD) of the high-intensity, Pinatubo-like eruption increases by 10% in the SSP5-8.5 2090 climate compared to the historical 1990 climate. Furthermore, the peak global-mean top-of-the-atmosphere radiative forcing of the same eruption increases by 30%. In contrast, the peak SAOD of the moderate-intensity eruption decreases by a factor of 4 (with a radiative forcing that is small compared to simulated natural variability). Our results thus suggest that volcanic forcing will become more extreme and polarized in the future, with the forcing associated with moderate-intensity and relatively frequent eruptions being muted, but the forcing associated with high-intensity and relatively rare eruptions being amplified. We analyze which mechanisms are responsible for the simulated impacts of climate change on volcanic forcing, and discuss potential additional feedbacks expected in our future ocean-atmosphere coupled simulations.
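The quoted percentage changes are simple relative changes in the peak of a forcing time series between the two climates. A sketch with illustrative numbers only (not the model output):

```python
def peak_change_percent(series_future, series_hist):
    """Relative change (%) in the peak of a time series (e.g. SAOD or
    radiative forcing) between a future and a historical climate."""
    return 100.0 * (max(series_future) - max(series_hist)) / max(series_hist)

# Hypothetical SAOD evolutions for a Pinatubo-like case:
saod_hist = [0.01, 0.10, 0.15, 0.08]
saod_ssp585 = [0.01, 0.11, 0.165, 0.09]
change = peak_change_percent(saod_ssp585, saod_hist)  # ~ +10% in this example
```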
How to cite: Aubry, T., Schmidt, A., and Haywood, J.: Interactive stratospheric aerosol model experiments suggest a strong impact of climate change on the aerosol evolution and radiative forcing from future eruptions., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3464, https://doi.org/10.5194/egusphere-egu2020-3464, 2020.
EGU2020-289 | Displays | ITS2.13/AS4.29
How volcanism impact on the variability of the South American Monsoon System and the associated Atlantic Subtropical Cell
Laura Sobral Verona, Ilana Wainer, and Myriam Khodri
Large volcanic eruptions can affect the global climate through changes in atmospheric and ocean circulation. Understanding the influence of volcanic eruptions on the hydroclimate over monsoon regions is of great scientific and social importance. The South American Monsoon System (SAMS) is the most important climatic feature of the continent. Both the Intertropical and the South Atlantic convergence zones (ITCZ and SACZ, respectively) are fundamental components of the SAMS. They show variations on a broad range of scales, dependent on complex multi-system interactions with the adjacent Atlantic Ocean and on teleconnections. Also driven by the winds, the Atlantic Subtropical Cell (STC) links the subduction zone in the subtropical gyre with the tropics. Hence, the STC influences equatorial sea surface temperature variability on interannual to decadal scales in the tropical Atlantic Ocean. To improve our understanding of the responses of the ocean-atmosphere system to volcanic forcing, we aim to identify the dominant mechanisms of seasonal-to-interdecadal variability of the SAMS and the Atlantic STC after large Pinatubo-like (1991) and Tambora-like (1815) eruptions, relying on the VolMIP model intercomparison experiments.
How to cite: Sobral Verona, L., Wainer, I., and Khodri, M.: How volcanism impact on the variability of the South American Monsoon System and the associated Atlantic Subtropical Cell, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-289, https://doi.org/10.5194/egusphere-egu2020-289, 2020.
EGU2020-291 | Displays | ITS2.13/AS4.29
Antarctic climate response to large volcanic eruptions in the historical period
Natália Silva, Ilana Wainer, and Myriam Khodri
Large tropical volcanic eruptions are well known to change the global climate and may even interfere with natural modes of variability such as the El Niño–Southern Oscillation. As they inject large amounts of sulfur gases into the stratosphere, sulfate aerosol loading increases in the months after the eruption, and the aerosol is then transported globally. Large tropical events may, therefore, affect extratropical climate variability. For example, temperature changes have been identified in Antarctica after the 1991 Pinatubo eruption, such as warming over the Antarctic Peninsula. However, a causal link with the eruption and, more generally, a possible influence of large tropical volcanic eruptions on the Southern Hemisphere climate are still open questions. In this study we focus on the five biggest eruptions of the historical period (Krakatau — Aug/1883, Santa María — Oct/1902, Mt Agung — Mar/1963, El Chichón — Apr/1982 and Pinatubo — Jun/1991) by assessing two CMIP6-class models (the IPSL-CM6A-LR Large Ensemble and BESM) and two reanalyses (the NOAA 20th Century Reanalysis and ECMWF's ERA 20th Century).
How to cite: Silva, N., Wainer, I., and Khodri, M.: Antarctic climate response to large volcanic eruptions in the historical period, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-291, https://doi.org/10.5194/egusphere-egu2020-291, 2020.
EGU2020-1132 | Displays | ITS2.13/AS4.29
Investigating the volcanic impacts on Tropical South Atlantic modes of variability for the Historical period using the IPSL-CM6-LR Large Ensemble and INPE-BESM
Eduardo Lobo Lopes, Ilana Elazari Klein Coaracy Wainer, and Myriam Khodri
In this study we investigate the South Atlantic Ocean response to large tropical volcanic eruptions during the historical period. In particular, we analyse the changes in the coupling of the ocean and the atmosphere over that basin triggered by changes in the amount of incoming shortwave radiation.
The analysis consists of averaging the responses to the five biggest eruptions of the last 200 years, namely Krakatoa (1883), Santa Maria (1902), Agung (1963), El Chichón (1982) and Pinatubo (1991), as represented by the IPSL-CM6A-LR Large Ensemble, from the Institut Pierre Simon Laplace, and BESM-CMIP6, from INPE-CPTEC. We perform the same analysis on observation-based SST products such as HadISST and NOAA's ERSSTv5.
To capture interannual changes in climate variability, we use two climate indices that assess the coupling of ocean and atmosphere on this timescale, namely the Atlantic Meridional Mode (AMM) and the South Atlantic Subtropical Dipole (SASD). We compute their time series from the model output and calculate their regression onto the SST and precipitation fields.
Such analysis should yield further insight into how the interaction between the ocean and the atmosphere responds to external forcings, providing a better understanding of the processes that control climate variability over the South Atlantic Ocean basin.
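The regression of an index time series onto gridded SST or precipitation fields can be sketched as follows (synthetic data; `regress_index_on_field` is an illustrative helper, not part of any of the cited model toolchains):

```python
import numpy as np

def regress_index_on_field(index, field):
    """Regression map: change of each grid point per one standard
    deviation of a climate index (e.g. AMM or SASD)."""
    idx = (index - index.mean()) / index.std()
    anom = field - field.mean(axis=0)                        # (time, ny, nx) anomalies
    return np.tensordot(idx, anom, axes=(0, 0)) / len(idx)   # (ny, nx) map

# Synthetic check: one grid point varies as exactly twice the index.
t = np.linspace(0.0, 4.0 * np.pi, 50)
index = np.sin(t)
field = np.zeros((50, 2, 2))
field[:, 0, 0] = 2.0 * index
rmap = regress_index_on_field(index, field)  # rmap[0, 0] ~ 2 * std(index)
```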
How to cite: Lobo Lopes, E., Elazari Klein Coaracy Wainer, I., and Khodri, M.: Investigating the volcanic impacts on Tropical South Atlantic modes of variability for the Historical period using the IPSL-CM6-LR Large Ensemble and INPE-BESM, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1132, https://doi.org/10.5194/egusphere-egu2020-1132, 2020.
EGU2020-162 | Displays | ITS2.13/AS4.29
Trace element emissions during the 2018 Kilauea Lower East Rift Zone eruption
Emily Mason, Penny Wieser, Emma Liu, Evgenia Ilyinskaya, Marie Edmonds, Rachel C W Whitty, Tamsin Mather, Tamar Elias, Patricia Amanda Nadeau, Christoph Kern, David J Schneider, and Clive Oppenheimer
The 2018 eruption on the Lower East Rift Zone of Kilauea volcano, Hawai’i, released unprecedented fluxes of gases (>200 kt/d SO2) and aerosol into the troposphere [1,2]. The eruption affected air quality across the island, and lava flows reached the ocean, forming a halogen-rich plume as lava rapidly boiled and evaporated seawater.
We present the at-source composition – gas and size-segregated aerosol – of both the magmatic plume (emitted from ‘Fissure 8’, F8) and the lava-seawater interaction plume (ocean entry, OE), including major gas species, and major and trace elements in non-silicate aerosol. Trace metal and metalloid (TMM) emissions during the 2018 eruption were the highest recorded for Kilauea, and the magmatic ‘fingerprint’ of TMMs (X/SO2 ratios) in the 2018 plume is consistent with measurements made at the summit lava lake in 2008 [3], and with other rift and hotspot volcanoes [4,5].
We show that the OE plume composition predominantly reflects seawater composition with a small contribution from plagioclase ± ash. However, elevated concentrations of some TMMs (Bi, Cd, Cu, Zn, Ag) with affinity for Cl-speciation in the gas phase cannot be accounted for by the silicate correction and therefore may derive from degassing of lava in the presence of elevated Cl⁻. In the case of silver and copper, concentrations in the OE plume are elevated above both the F8 plume and seawater.
At-vent speciation of TMMs in the F8 plume during oxidation (following a correction for ash contributions) was assessed using a Gibbs Energy Minimization algorithm (HSC chemistry, Outotec Research). We also demonstrate the sensitivity of speciation in the plume to the concentration of common ligand-forming elements, chlorine and sulfur. These results could be used as initial conditions in atmospheric reaction models to investigate how plume composition evolves as low-temperature chemistry takes over.
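The silicate correction mentioned above is conceptually a subtraction of the ash-derived fraction of each element, scaled via a purely lithophile reference element. A minimal sketch (the function and numbers are illustrative, not the measured Kilauea values):

```python
def ash_corrected(x_plume, ref_plume, x_magma, ref_magma):
    """Non-silicate (volatile-derived) concentration of element X:
    subtract the ash contribution inferred from a lithophile reference
    element (e.g. Al or Mg) and the X/ref ratio of the bulk magma."""
    ash_contribution = ref_plume * (x_magma / ref_magma)
    return x_plume - ash_contribution

# Illustrative: measured X = 10, reference element = 2 (plume units),
# X/ref = 1/4 in the magma, so ash supplies 0.5 of the measured X.
volatile_x = ash_corrected(10.0, 2.0, 1.0, 4.0)  # 9.5
```

Elements whose corrected concentrations remain elevated, as for Bi, Cd, Cu, Zn and Ag in the OE plume, require a non-silicate source.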
References:
[1] Neal C et al. (2019) Science
[2] Kern C et al. (2019) AGU Fall meeting abstract V43C-0209
[3] Mather T et al. (2012) GCA 83:292-323
[4] Zelenzki et al. (2013) Chem Geol 357:95-116
[5] Gauthier P-J et al. (2016) J Geophys 121:1610-1630
How to cite: Mason, E., Wieser, P., Liu, E., Ilyinskaya, E., Edmonds, M., Whitty, R. C. W., Mather, T., Elias, T., Nadeau, P. A., Kern, C., Schneider, D. J., and Oppenheimer, C.: Trace element emissions during the 2018 Kilauea Lower East Rift Zone eruption, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-162, https://doi.org/10.5194/egusphere-egu2020-162, 2020.
EGU2020-1008 | Displays | ITS2.13/AS4.29
A novel technique for studying volcanic gas chemistry and dispersion on short time scales
Christopher Fuchs, Jonas Kuhn, Nicole Bobrowski, and Ulrich Platt
Volcanic gas emissions, in particular of sulphur and halogen species, play an important role in atmospheric chemistry. Owing to the complex reaction kinetics of halogen radicals inside the volcanic plume, many properties, e.g. the factors limiting the chemistry and the timescales of the reactions, are still not well understood.
Imaging techniques based on optical remote sensing can provide valuable insights into both volcanic degassing fluxes and chemical conversions within the plume as it continuously mixes with the atmosphere. However, state-of-the-art techniques are either too slow to resolve plume chemistry on its intrinsic time scales (e.g. DOAS) or show many cross sensitivities and hence are limited to rather high trace gas concentrations (e.g. SO2 cameras).
We introduce a novel technique for volcanic trace gas imaging, which, by employing a Fabry-Perot interferometer (FPI), uses detailed spectral information for the detection of the target trace gas. Cross sensitivities are thereby drastically reduced, allowing for the detection of much lower SO2 concentrations and for imaging of other trace gas species, e.g. BrO and OClO. Furthermore, the inherent calibration of the new technique avoids the need for additional DOAS measurements or calibration gas cells.
We present the first measurements of volcanic SO2 with an imaging Fabry-Perot interferometer correlation spectroscopy (IFPICS) prototype. The sensitivity of ≈10⁻¹⁹ cm² molec⁻¹ is comparable to filter-based SO2 cameras, whereas the selectivity is much higher (e.g. no ozone interference). This will largely increase the accuracy of SO2 emission rates, which are routinely used to approximate fluxes of other volcanic gas emissions into the atmosphere.
Additionally, sensitivity studies for further trace gases, combining laboratory measurements and radiative transfer modelling, indicate promising projected BrO detection limits of <10¹⁴ molec cm⁻², corresponding to mixing ratios of 10 to 100 ppt in volcanic plumes. The direct visualisation of BrO within the volcanic plume as it mixes with the ambient atmosphere will give important insights into the plume’s halogen chemistry and, thereby, its impact on the atmosphere.
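The correlation measurement underlying such instruments can be illustrated with the standard two-setting apparent-absorbance retrieval: the difference in optical density between an absorbing ("on-band") and a non-absorbing ("off-band") instrument setting, divided by an effective cross-section (of order 10⁻¹⁹ cm² molec⁻¹ for SO2). A schematic, noise-free sketch, not the IFPICS evaluation code:

```python
import math

def column_density(i_on, i0_on, i_off, i0_off, sigma_eff):
    """Slant column density from a two-setting correlation measurement:
    apparent absorbance between the absorbing ('on') and non-absorbing
    ('off') settings, divided by an effective absorption cross-section."""
    apparent_absorbance = -math.log(i_on / i0_on) + math.log(i_off / i0_off)
    return apparent_absorbance / sigma_eff

# Idealized case: SO2 attenuates only the on-band setting (Beer-Lambert).
sigma = 1e-19                       # cm^2 molec^-1, typical SO2 scale
scd_true = 1e18                     # molec cm^-2
i_on = math.exp(-sigma * scd_true)  # transmitted on-band intensity
scd = column_density(i_on, 1.0, 1.0, 1.0, sigma)  # recovers ~1e18
```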
How to cite: Fuchs, C., Kuhn, J., Bobrowski, N., and Platt, U.: A novel technique for studying volcanic gas chemistry and dispersion on short time scales, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1008, https://doi.org/10.5194/egusphere-egu2020-1008, 2020.
EGU2020-2623 | Displays | ITS2.13/AS4.29
Volcanic ash chemical aging from multiple observational constraints for the Pinatubo eruption
Mohamed AbdelKader, Georgiy Stenchikov, Christoph Bruhl, and Jos Lelieveld
Condensation of sulfuric acid, formed from co-injected sulfur dioxide, on volcanic ash particles (so-called chemical aging) increases the particle size and changes the particles' microphysical and optical properties. The larger aged particles have a higher removal rate, which reduces their lifetime. On the other hand, aging increases the scattering cross-section, and therefore the ash optical depth increases. The uptake of sulfuric acid by volcanic ash delays the formation of new sulfate particles depending on the level of aging, which is characterized by the number of sulfuric acid layers coating a single ash particle (i.e., monolayers). Both the formation of sulfate aerosols and the uptake of sulfuric acid by ash particles affect the development of a volcanic plume and its radiative impact.
We employ the ECHAM5/MESSy atmospheric chemistry general circulation model (EMAC) to simulate the chemical aging of volcanic ash in the 1991 Pinatubo eruption volcanic plume. We emit 17 Mt of SO2 and 75 Mt of fine ash. Two aerosol modes represent the ash size distribution: accumulation and coarse, with 0.23 and 3.4 µm median radii, respectively. We allow the sulfuric acid to condense on the ash particles and assume different levels of aging (from not aged to highly aged). We use independent observations for sulfur dioxide, volcanic ash mass, volcanic ash optical depth, and plume coverage area from the Advanced Very-High-Resolution Radiometer (AVHRR), and total optical depth from the Stratospheric Aerosol and Gas Experiment II (SAGE II). We constrain the number of monolayers on ash particles by testing simulated ash surface area and optical depth, calculated within a fully coupled online stratospheric-tropospheric chemistry model, against these observations. The level of volcanic ash aging strongly affects the surface area of the volcanic ash plume, ranging from 3×10⁶ km² to 6×10⁶ km², compared to 3.8×10⁶ km² from AVHRR retrievals. The volcanic ash optical depth, averaged over the volcanic plume area, ranges between 2 and 3.6. Using a five-monolayer coating assumption allows us to better reproduce the observed SO2 mass, its decay rate, total plume surface area, and ash optical depth. Most of the coarse ash particles are removed within a week after the eruption, reducing the amount of sulfuric acid within the volcanic plume. The smaller particles have a much longer residence time and continue to take up sulfuric acid for more than three months.
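The SO2 decay rate compared against observations is typically summarized as an e-folding time from a fit to the logarithm of the plume mass. A minimal sketch with an idealized, noise-free decay (illustrative numbers, not the EMAC output):

```python
import numpy as np

def efolding_time(days, so2_mass):
    """E-folding decay time (days) from a least-squares fit of
    ln(mass) versus time."""
    slope, _intercept = np.polyfit(days, np.log(so2_mass), 1)
    return -1.0 / slope

days = np.arange(0.0, 60.0, 5.0)
mass = 17.0 * np.exp(-days / 25.0)  # 17 Mt injection, 25-day e-folding
tau = efolding_time(days, mass)     # ~25 days
```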
How to cite: AbdelKader, M., Stenchikov, G., Bruhl, C., and Lelieveld, J.: Volcanic ash chemical aging from multiple observational constraints for the Pinatubo eruption, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2623, https://doi.org/10.5194/egusphere-egu2020-2623, 2020.
EGU2020-4816 | Displays | ITS2.13/AS4.29
Observation and Quantification of CO2 passive degassing at sulphur Banks from Kilauea Volcano using thermal Infrared Multispectral ImagingStephane Boubanga Tombet, Sylvain Gatti, Andreas Eisele, and Vince Morton
The formation of Earth's atmosphere and oceans has been deeply influenced by volcanic emissions. In addition, the planet's radiative balance and stratospheric chemistry can be affected by materials injected into the atmosphere by large explosive eruptions. Volcanic emissions often contain water vapor (H2O) and carbon dioxide (CO2), and depending on the type of volcano they may contain varying proportions of toxic/corrosive gases such as sulphur dioxide (SO2), hydrogen fluoride (HF) and silicon tetrafluoride (SiF4). CO2 is generally the most abundant of these gases, has the lowest solubility among the volatile compounds of magmatic liquids, and is less susceptible to atmospheric removal than most other magmatic substances such as SO2 and HF. Owing to these properties, volcanic CO2 emission rates could play an important role in assessing volcanic hazards and in constraining the role of magma degassing in the biogeochemical cycle of carbon. However, measurements of CO2 emission rates from volcanoes remain challenging, mainly due to the difficulty of measuring volcanic CO2 against the high background level of CO2 in the atmosphere. Thermal infrared (TIR) imaging is now a well-established tool for monitoring volcanic activity, since many volcanic gases such as CO2 and SO2 are infrared-active molecules. High-speed broadband cameras give valuable insight into the physical processes taking place during volcanic activity, while spectrally resolved cameras allow the composition of volcanic gases to be assessed.
In this work we conducted TIR imaging and quantification of CO2 passive degassing at Sulphur Banks, Kilauea volcano, using a Telops midwave-infrared time-resolved multispectral imager. The imager allows synchronized acquisition on eight channels at a high frame rate using a motorized filter wheel. Measurements with appropriate spectral filters allow estimation of the gas emissivity parameters, in addition to providing selectivity regarding the chemical nature of the emitted gases. Our results show CO2 measurements within the volcano's plume from its distinct spectral feature. Quantitative chemical maps with local CO2 concentrations of a few hundred ppm were derived, and mass flow rates of a few g/s were also estimated. The results show that thermal infrared multispectral imaging provides unique insights for volcanology studies.
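As a schematic of how spectral selectivity yields a column amount, the sketch below applies a two-channel (on-band/off-band) Beer-Lambert differential to a radiance pair. This is a generic illustration of channel-ratio gas detection, not the actual Telops retrieval, which works with thermal emission and emissivity; all quantities here are hypothetical.

```python
import math

def column_density(i_on: float, i_off: float,
                   sigma_on: float, sigma_off: float) -> float:
    """Slant column density (molecules/cm^2) from an on-band / off-band
    radiance pair via Beer-Lambert differential absorption:
        ln(I_off / I_on) = (sigma_on - sigma_off) * N
    sigma_* are absorption cross-sections (cm^2/molecule) in each channel.
    Purely illustrative -- thermal-IR emission retrievals are more involved."""
    return math.log(i_off / i_on) / (sigma_on - sigma_off)

# Hypothetical example: the on-band channel is attenuated by a factor e^-1
# relative to the off-band channel, with a differential cross-section of
# 1e-18 cm^2/molecule, giving a column of 1e18 molecules/cm^2.
n_col = column_density(math.exp(-1.0), 1.0, 2e-18, 1e-18)
```

The design point is that ratioing two nearby channels cancels the (unknown) continuum radiance, leaving only the gas-specific differential signal.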
How to cite: Boubanga Tombet, S., Gatti, S., Eisele, A., and Morton, V.: Observation and Quantification of CO2 passive degassing at sulphur Banks from Kilauea Volcano using thermal Infrared Multispectral Imaging, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4816, https://doi.org/10.5194/egusphere-egu2020-4816, 2020.
EGU2020-7036 | Displays | ITS2.13/AS4.29
New strategies for chemistry-transport modelling of volcanic plumes: application to the case of Mount Etna eruption in March 18, 2012Mathieu Lachatre, Sylvain Mailler, Laurent Menut, Solene Turquety, Pasquale Sellitto, Henda Guermaz, Giuseppe Salerno, and Elisa Carboni
Atmospheric modelling allows the study of large-scale events such as volcanic eruptions, which can emit large amounts of plume-confined particulate matter and gases, in order to evaluate their transport in the atmosphere and their subsequent impacts. However, to study these events more precisely, different issues have to be addressed. One notable example is the well-known excessive numerical diffusion in the atmospheric column in Eulerian models, which leads to excessive plume dispersion, misrepresentation of the plume's three-dimensional morphology, and hence of the geographical extent of its impacts. The moderate eruption of Mount Etna on March 18, 2012, which released about 3 kt of sulphur dioxide into the atmosphere, has been simulated in this study with the CHIMERE chemistry-transport model. The plume was observed and tracked with satellite instruments (OMI and IASI) for several days during its transport over the Mediterranean Sea, for comparison with model outputs.
Sensitivity tests have been performed to evaluate the impact of injection altitude and profile shape on the subsequent trajectory of the plume. Altitude was shown to be the most sensitive parameter, while the results remain weakly sensitive to the vertical shape of the injection profile.
In order to address the problem of excessive numerical diffusion, we have included a new antidiffusive transport scheme in the vertical direction and a new strategy that directly uses the vertical wind field provided by the forcing meteorological model. We show that both of these improvements permit a substantial reduction in numerical diffusion. The new antidiffusive vertical scheme brought the strongest improvement in our model outputs. To a lesser extent, a more realistic representation of the vertical wind field has also been shown to reduce volcanic plume spreading. In summary, these two improvements together improve the representation of the plume as much as increasing the number of vertical levels does, but without the additional computational burden.
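The numerical diffusion this abstract targets can be seen in a minimal 1-D sketch: the first-order upwind scheme below (a textbook baseline, not the CHIMERE scheme or the antidiffusive scheme discussed above) conserves mass exactly but smears a sharp plume at every step.

```python
def upwind_advect(field, courant, steps):
    """Advect a 1-D field with the first-order upwind scheme on a periodic
    domain. Stable for 0 <= courant <= 1, but numerically diffusive:
    sharp features are progressively smeared even though total mass is
    conserved -- the artefact antidiffusive schemes are designed to limit."""
    f = list(field)
    n = len(f)
    for _ in range(steps):
        # f[i-1] wraps around at i == 0, giving periodic boundaries
        f = [f[i] - courant * (f[i] - f[i - 1]) for i in range(n)]
    return f

# A delta-like plume loses peak amplitude purely through numerical diffusion:
plume = [0.0] * 50
plume[10] = 1.0
out = upwind_advect(plume, 0.5, 40)
# sum(out) stays 1.0 (mass conserved) while max(out) drops well below 1.0.
```

After 40 steps the initially sharp spike has spread over many grid cells; in a real chemistry-transport model the analogous vertical smearing misplaces the plume altitude, which is why the abstract's antidiffusive scheme matters.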
This study has been supported by AID (Agence de l'Innovation de Défense) under grant TROMPET.
How to cite: Lachatre, M., Mailler, S., Menut, L., Turquety, S., Sellitto, P., Guermaz, H., Salerno, G., and Carboni, E.: New strategies for chemistry-transport modelling of volcanic plumes: application to the case of Mount Etna eruption in March 18, 2012, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7036, https://doi.org/10.5194/egusphere-egu2020-7036, 2020.
EGU2020-7548 | Displays | ITS2.13/AS4.29
Raikoke aerosol clouds observed from Tbilisi, Georgia and Halle, Belgium using ground-based twilight sky brightness spectral measurements.Nina Mateshvili, Didier Fussen, Iuri Mateshvili, Filip Vanhellemont, Christine Bingen, Tamar Paatashvili, Erkki Kyrölä, Charles Robert, and Emmanuel Dekemper
The eruption of Raikoke volcano (Kuril Islands, Russia) on 21 June 2019 sent an ash plume to 10-13 km altitude, above the local tropopause. Volcanic aerosols were transported around the globe, causing spectacular purple twilights. We will present ground-based measurements of monochromatic twilight sky brightness at 780 nm wavelength at two locations: Tbilisi, Georgia (41° 43′ N, 44° 47′ E) and Halle, Belgium (50° 44′ N, 4° 14′ E). Aerosol extinction vertical profiles in the lower stratosphere and upper troposphere were retrieved with the help of the Levenberg-Marquardt algorithm. The Monte Carlo code Siro was used to build the forward model. The Raikoke aerosols observed above both sites showed an essentially cloudy and variable structure. Multiple layers were observed between 10 and 17 km, with extinction up to 0.01 km⁻¹. We will present the evolution of the Raikoke aerosol cloud over the period July 2019 - January 2020.
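As an aside on the retrieval machinery: the sketch below is a minimal pure-Python Levenberg-Marquardt loop for a two-parameter model y = a·exp(-b·x). It is purely illustrative of the algorithm named in the abstract; the actual retrieval fits extinction profiles through the Siro Monte Carlo forward model, not this toy exponential.

```python
import math

def levenberg_marquardt(xs, ys, a, b, iters=50, lam=1e-3):
    """Fit y = a*exp(-b*x) by Levenberg-Marquardt: damped Gauss-Newton
    steps, with the damping factor lam adapted up on rejected steps and
    down on accepted ones."""
    for _ in range(iters):
        r = [y - a * math.exp(-b * x) for x, y in zip(xs, ys)]
        # Jacobian rows: (d model/d a, d model/d b)
        j = [(math.exp(-b * x), -a * x * math.exp(-b * x)) for x in xs]
        jtj = [[sum(ji[p] * ji[q] for ji in j) for q in range(2)]
               for p in range(2)]
        jtr = [sum(ji[p] * ri for ji, ri in zip(j, r)) for p in range(2)]
        # Damped 2x2 normal equations (Levenberg damping on the diagonal)
        m00 = jtj[0][0] * (1.0 + lam)
        m11 = jtj[1][1] * (1.0 + lam)
        m01 = jtj[0][1]
        det = m00 * m11 - m01 * m01
        da = (jtr[0] * m11 - m01 * jtr[1]) / det
        db = (m00 * jtr[1] - m01 * jtr[0]) / det
        new_a, new_b = a + da, b + db
        old_cost = sum(ri * ri for ri in r)
        new_cost = sum((y - new_a * math.exp(-new_b * x)) ** 2
                       for x, y in zip(xs, ys))
        if new_cost < old_cost:
            a, b, lam = new_a, new_b, lam * 0.5   # accept step, relax damping
        else:
            lam *= 2.0                            # reject step, damp harder
    return a, b
```

The damping term is what distinguishes LM from plain Gauss-Newton: far from the solution it behaves like cautious gradient descent, near the solution it converges quadratically.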
How to cite: Mateshvili, N., Fussen, D., Mateshvili, I., Vanhellemont, F., Bingen, C., Paatashvili, T., Kyrölä, E., Robert, C., and Dekemper, E.: Raikoke aerosol clouds observed from Tbilisi, Georgia and Halle, Belgium using ground-based twilight sky brightness spectral measurements., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7548, https://doi.org/10.5194/egusphere-egu2020-7548, 2020.
EGU2020-8337 | Displays | ITS2.13/AS4.29
Small-scale volcanic aerosols variability, processes and direct radiative impact at Mount Etna during the EPL-RADIO/REFLECT campaignsPasquale Sellitto, Giuseppe Salerno, Alessandro La Spina, Tommaso Caltabiano, Simona Scollo, Antonella Boselli, Giuseppe Leto, Ricardo Zanmar Sanchez, Alessia Sannino, Suzanne Crumeyrolle, Benjamin Hanoune, Chiara Giorio, Salvatore Giammanco, Tjarda Roberts, Alcide di Sarra, Bernard Legras, and Pierre Briole
The aerosol properties of Mount Etna's passive degassing plume and its short-term processes and radiative impact were studied in detail during the EPL-RADIO/REFLECT campaigns (summers 2016, 2017 and 2019), using a synergistic combination of remote-sensing and in situ observations, and radiative transfer modelling. Summit observations show extremely high particulate matter concentrations, with no evidence of secondary sulphate aerosol (SA) formation. Marked indications of secondary SA formation, i.e. by the conversion of volcanic SO2 emissions, are found at larger spatial scales (<20 km downwind of the craters). Using portable photometers, the first mapping of the small-scale spatial variability of the average size and burden of volcanic aerosols is obtained, along different longitudinal, perpendicular and vertical sections. A substantial variability of the plume properties is found at these spatial scales, revealing that processes (e.g. new particle formation and coarse aerosol sedimentation) are at play which are not represented in current regional-scale modelling and satellite observations. Vertical structures of typical passive degassing plumes are also obtained using observations from a fixed LiDAR station constrained with quasi-simultaneous photometric observations. These observations are used as input to radiative transfer calculations to obtain the shortwave top-of-the-atmosphere (TOA) and surface radiative effects of the plume. Moreover, the radiative impact of Mount Etna's emissions is studied using a medium-term time series (a few months during summer 2019) of coupled aerosol optical properties and surface radiative fluxes at a fixed station on Etna's eastern flank. These are the first estimations available in the literature of the radiative impact of a passive degassing volcanic plume and are critically discussed here.
Cases of co-existent volcanic aerosol layers and aerosols from other sources (Saharan dust transport events, wildfire from South Italy and marine aerosols) are also presented and discussed.
How to cite: Sellitto, P., Salerno, G., La Spina, A., Caltabiano, T., Scollo, S., Boselli, A., Leto, G., Zanmar Sanchez, R., Sannino, A., Crumeyrolle, S., Hanoune, B., Giorio, C., Giammanco, S., Roberts, T., di Sarra, A., Legras, B., and Briole, P.: Small-scale volcanic aerosols variability, processes and direct radiative impact at Mount Etna during the EPL-RADIO/REFLECT campaigns, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8337, https://doi.org/10.5194/egusphere-egu2020-8337, 2020.
EGU2020-8470 | Displays | ITS2.13/AS4.29
Modelling new particle formation in a passive volcanic plume using a new parameterisation in WRF-Chem - effects on climate-relevant variables at the regional scaleCéline Planche, Clémence Rose, Sandra Banson, Aurelia Lupascu, Mathieu Gouhier, and Karine Sellegri
New particle formation (NPF) is an important source of aerosol particles at the global scale, including, in particular, cloud condensation nuclei (CCN). NPF has been observed worldwide in a broad variety of environments, but some specific conditions, such as those encountered in volcanic plumes, remain poorly documented in the literature. Yet, these conditions could promote the occurrence of the process, as recently evidenced in the eruption plume of Piton de la Fournaise (Rose et al., 2019); moreover, a dominant fraction of the volcanic particles was found to be of secondary origin in the plume, further highlighting the importance of the particle formation and growth processes associated with the eruption plume. A deeper understanding of such natural processes is thus essential to assess their present-day climate-related effects, but also to better define pre-industrial conditions and their variability in climate model simulations.
Sulfuric acid (SA) is commonly accepted as one of the main precursors for atmospheric NPF, and its role could be even more important in volcanic plume conditions, as recently evidenced by the airborne measurements conducted in the passive volcanic plumes of Etna and Stromboli (Sahyoun et al., 2019). Indeed, the flights performed in the framework of the STRAP campaign allowed direct measurement of SA in such conditions for the first time, and highlighted a strong connection between the cluster formation rate and the SA concentration. Following these observations, the objective of the present work was to further quantify the formation of new particles in a volcanic plume and assess the effects of the process at a regional scale. For that purpose, the new parameterisation of nucleation derived by Sahyoun et al. (2019) was introduced into the WRF-Chem model, further optimized for the description of NPF. The flight ETNA13, described in detail in Sahyoun et al. (2019), was used as a case study to evaluate the effect of the new parameterisation on the cluster formation rate and the particle number concentration in various size ranges, including CCN (i.e. climate-relevant) sizes.
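Schematically, a nucleation closure of this kind enters a model as a rate law linking the cluster formation rate J to the sulfuric acid concentration. The sketch below uses a generic power law with placeholder coefficients; these are NOT the coefficients derived by Sahyoun et al. (2019), and only illustrate how such a parameterisation plugs into a chemistry-transport model.

```python
def nucleation_rate(h2so4_conc: float, k: float = 1e-7, p: float = 2.0) -> float:
    """Illustrative power-law nucleation parameterisation:
        J = k * [H2SO4]**p   (J in cm^-3 s^-1, [H2SO4] in molecules cm^-3)
    k and p are placeholder values, not those of Sahyoun et al. (2019).
    A model calls this each time step, per grid cell, to convert the local
    sulfuric acid concentration into a source of new clusters."""
    return k * h2so4_conc ** p

# With p = 2, doubling the H2SO4 concentration quadruples the cluster
# formation rate -- the kind of strong SA dependence the flights revealed.
j_plume = nucleation_rate(1e7)
```

The exponent p encodes how many SA molecules participate in the critical cluster, which is why constraining it from in-situ plume data is the core of the parameterisation.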
References:
Sahyoun, M., Freney, E., Brito, J., Duplissy, J., Gouhier, M., Colomb, A., Dupuy, R., Bourianne, T., Nowak, J. B., Yan, C., Petäjä, T., Kulmala, M., Schwarzenboeck, A., Planche, C., and Sellegri, K.: Evidence of new particle formation within Etna and Stromboli volcanic plumes and its parameterization from airborne in-situ measurements, J. Geophys. Res.-Atmos., 124, 5650–5668, https://doi.org/10.1029/2018JD028882, 2019.
Rose, C., Foucart, B., Picard, D., Colomb, A., Metzger, J.-M., Tulet, P., and Sellegri, K.: New particle formation in the volcanic eruption plume of the Piton de la Fournaise: specific features from a long-term dataset, Atmos. Chem. Phys., 19, 13243–13265, https://doi.org/10.5194/acp-19-13243-2019, 2019.
How to cite: Planche, C., Rose, C., Banson, S., Lupascu, A., Gouhier, M., and Sellegri, K.: Modelling new particle formation in a passive volcanic plume using a new parameterisation in WRF-Chem - effects on climate-relevant variables at the regional scale, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8470, https://doi.org/10.5194/egusphere-egu2020-8470, 2020.
EGU2020-19656 | Displays | ITS2.13/AS4.29
Self-limiting atmospheric lifetime of environmentally reactive elements in volcanic plumesEvgenia Ilyinskaya, Emily Mason, Penny Wieser, Lacey Holland, Emma Liu, Tamsin A. Mather, Marie Edmonds, Rachel Whitty, Tamar Elias, Patricia Nadeau, David Schneider, Jim McQuaid, Sarah Allen, Clive Oppenheimer, Christoph Kern, and David Damby
Volcanoes are a large global source of almost every element, including ~20 environmentally reactive trace elements classified as metal pollutants (e.g. selenium, cadmium and lead). Fluxes of metal pollutants from individual eruptions can be comparable to total anthropogenic emissions from large countries such as China.
The 2018 Lower East Rift Zone eruption of Kīlauea, Hawaii produced exceptionally high emission rates of major and trace chemical species over 3 months, compared to other basaltic eruptions (200 kt/day of SO2; Kern et al., 2019). We tracked the volcanic plume from the vent to exposed communities over 0-240 km distance using in-situ sampling and atmospheric dispersion modelling. This is the first time that trace elements in volcanic emissions (~60 species) have been mapped over such distances. In 2019, we repeated the field campaign during a no-eruption period and showed that the volcanic emissions had caused a 3-5 order of magnitude increase in airborne metal pollutant concentrations across the Island of Hawai'i.
We show that the volatility of the elements (the ease with which they are degassed from the magma) controls their particle-phase speciation, which in turn determines how fast they are depleted from the plume after emission. Elements with high magmatic volatilities (e.g. selenium, cadmium and lead) have up to 6 orders of magnitude higher depletion rates compared to non-volatile elements (e.g. magnesium, aluminium and rare earth metals).
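A simplified reading of this volatility control is first-order (exponential) depletion with an element-dependent e-folding length. The sketch below contrasts a volatile and a refractory element over the 0-240 km sampling range mentioned above; the e-folding lengths are hypothetical, chosen only to illustrate the contrast, not fitted to the Kīlauea data.

```python
import math

def plume_fraction_remaining(distance_km: float, efold_km: float) -> float:
    """Fraction of an element remaining airborne in the plume at a given
    downwind distance, assuming first-order depletion with an
    element-dependent e-folding length -- a simplified model of the
    volatility-controlled removal described above."""
    return math.exp(-distance_km / efold_km)

# Hypothetical e-folding lengths (illustrative only):
volatile = plume_fraction_remaining(240.0, 20.0)      # selenium-like (assumed)
refractory = plume_fraction_remaining(240.0, 2000.0)  # magnesium-like (assumed)
# The volatile element is almost entirely deposited near the vent, while
# the refractory element largely survives to far-field communities.
```

This is why, as the abstract argues, near-vent populations carry a disproportionate deposition burden for volatile metal pollutants while far-field exposure is dominated by the less volatile species.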
Previous research and hazard mitigation efforts on volcanic emissions have focussed on sulphur, and it has been assumed that other pollutants follow the same dispersion patterns. Our results show that the atmospheric fate of sulphur, and therefore the associated hazard distribution, is not an accurate guide to the behaviour and potential impacts of other species in volcanic emissions. Metal pollutants are predominantly volatile in volcanic plumes, and their rapid deposition (self-limited by their volatility) places disproportionate environmental burdens on the populated areas in the immediate vicinity of the active vents and, in turn, reduces the impacts on far-field communities.
Reference: Kern, C., T. Elias, P. Nadeau, A. H. Lerner, C. A. Werner, M. Cappos, L. E. Clor, P. J. Kelly, V. J. Realmuto, N. Theys, S. A. Carn, AGU, 2019; https://agu.confex.com/agu/fm19/meetingapp.cgi/Paper/507140.
How to cite: Ilyinskaya, E., Mason, E., Wieser, P., Holland, L., Liu, E., Mather, T. A., Edmonds, M., Whitty, R., Elias, T., Nadeau, P., Schneider, D., McQuaid, J., Allen, S., Oppenheimer, C., Kern, C., and Damby, D.: Self-limiting atmospheric lifetime of environmentally reactive elements in volcanic plumes, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19656, https://doi.org/10.5194/egusphere-egu2020-19656, 2020.
EGU2020-20415 | Displays | ITS2.13/AS4.29
Etesian winds after major volcanic eruptions
Stergios Misios, Ioannis Logothetis, Mads F. Knudsen, Christoffer Karoff, and Kleareti Tourpali
Etesian winds are northerly winds in the lower atmosphere, blowing over the Aegean basin from early summer to early autumn and moderating summertime heating levels. The interannual variability of the Etesians is thought to be linked to the extended Indian Summer Monsoon and the tropical Pacific region. Here, we investigate the response of the Etesians to major volcanic eruptions with the aid of ensembles of historical simulations. Specifically, we use the CESM Last Millennium and Large Ensemble simulations to investigate modelled Etesian changes in the one to three years after eruptions. For all major eruptions of the last millennium we find a consistent reduction in amplitude, peaking in the first year after the eruption. Interestingly, the Laki eruption shows a signal similar to that of the other major tropical eruptions. Modelled results are compared to signals in the observational record, and a possible mechanism connecting the Etesians to the Indian Monsoon region is discussed.
How to cite: Misios, S., Logothetis, I., Knudsen, M. F., Karoff, C., and Tourpali, K.: Etesian winds after major volcanic eruptions, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20415, https://doi.org/10.5194/egusphere-egu2020-20415, 2020.
EGU2020-21721 | Displays | ITS2.13/AS4.29
Recovered measurements of the 1960s stratospheric aerosol layer for new constraints for volcanic forcing in the years after 1963 Agung
Graham Mann, Juan Carlos Antuna Marrero, Amanda Maycock, Christine McKenna, Sarah Shallcross, Sandip Dhomse, Larry Thomason, Beiping Luo, Terry Deshler, and James Rosen
The WCRP-SPARC initiative on stratospheric sulphur (SSiRC) has begun a new activity to recover past observational datasets of the stratospheric aerosol layer.
The data rescue activity aims to provide additional constraints for volcanic impacts on climate and is organised into three time-periods:
- The quiescent period prior to the major 1963 Agung eruption,
- The period of strong volcanic activity during 1963-1969,
- The Jul-Dec 1991 period after Pinatubo when the SAGE-II signal was saturated.
A new page within the SSiRC website gives further information on the datasets within this activity ( http://www.sparc-ssirc.org --> Activities --> Data Rescue).
In this presentation, we explain the 1963-1969 component of the data rescue and compare the CMIP5 and CMIP6 volcanic aerosol datasets for this period with post-Agung interactive stratospheric aerosol model simulations, alongside a preliminary analysis of 15-year global-mean surface temperature trends from CMIP6 historical integrations for 1950-1980.
The 1960s was a strongly volcanically active decade, with the major 1963 Agung eruption and tropical stratosphere-injecting eruptions in 1965 (Taal), 1966 (Awu) and 1968 (Fernandina) generating a prolonged period of strong natural surface cooling.
Less than a year after the Agung eruption, the first in-situ measurements of a major volcanic aerosol cloud were made with dust-sondes launched from Minneapolis, which measured aerosol particle concentrations in 10 soundings between 1963 and 1965 (6 in 1963-4).
In global surveys, U-2 aircraft were equipped with impactors to measure stratospheric aerosol particle size distribution and composition, for example detecting the presence of volcanic ash within the Agung volcanic plume.
Early ground-based active remote sensing measurements (lidar, searchlight) also measured the vertical profile of the Agung-enhanced stratospheric aerosol layer.
The main purpose of the SSiRC data rescue is to provide constraints for interactive stratospheric aerosol models, aligning with the ISA-MIP activity. These constraints could potentially lead to new volcanic forcing datasets for climate models, ultimately aiming to improve the attribution of anthropogenic change and future projections.
How to cite: Mann, G., Antuna Marrero, J. C., Maycock, A., McKenna, C., Shallcross, S., Dhomse, S., Thomason, L., Luo, B., Deshler, T., and Rosen, J.: Recovered measurements of the 1960s stratospheric aerosol layer for new constraints for volcanic forcing in the years after 1963 Agung, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21721, https://doi.org/10.5194/egusphere-egu2020-21721, 2020.
EGU2020-17415 | Displays | ITS2.13/AS4.29
The influence of volcanic eruptions on circulation regimes over the North Atlantic and their impact on European climate
Yang Feng, Myriam Khodri, Laurent Li, Marie-Alexandrine Sicre, and Nicolas Lebas
Large volcanic eruptions influence climate on both annual and decadal time scales through dynamical interactions between different components of the Earth system. It is well established that the North Atlantic Oscillation (NAO) tends to shift towards its positive phase during the winter season in the first 1–2 years after large tropical volcanic eruptions, causing warming over Europe. However, other North Atlantic circulation regimes, such as the Atlantic Ridge or the zonal regime, have received less attention. This study explores the volcanic fingerprint, in terms of patterns and mechanisms, on the North Atlantic atmospheric circulation in IPSL-CM6A-LR model simulations of tropical eruptions of the last millennium, using dedicated sensitivity experiments and observations.
How to cite: Feng, Y., Khodri, M., Li, L., Sicre, M.-A., and Lebas, N.: The influence of volcanic eruptions on circulation regimes over the North Atlantic and their impact on European climate, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17415, https://doi.org/10.5194/egusphere-egu2020-17415, 2020.
ITS2.15/BG2.25 – Pan-Eurasian EXperiment (PEEX) – Observation, Modelling and Assessment in the Arctic-Boreal Domain
EGU2020-11193 | Displays | ITS2.15/BG2.25
Centennial scale environmental change at key arctic observational sites
Torben Røjle Christensen, Kerstin Rasmussen, Jakob Abermann, Katrine Raundrup, Kirsten Christoffersen, Birger Ulf Hansen, and Marie Frost Arndal
The Arctic is changing in response to ongoing warming. Multiple effects have been documented in terms of sea-ice distribution, land-ice volume and ecosystems in both the marine and terrestrial realm, which are clear responses to the overall global warming. Targeted efforts documenting individual components of the arctic system build the baseline for quantification of these effects.
Comprehensive ecosystem observational programs covering glacial, terrestrial and marine components are rare in the Arctic, but one such program, the Greenland Ecosystem Monitoring (GEM) program, has now been operational for nearly 25 years at three main sites in Greenland: Zackenberg valley and Young Sund in NE Greenland represent the high-arctic environment, Disko Island on the central west coast of Greenland the transition between the high and the low Arctic, and Nuuk-Kobbefjord in SW Greenland the low Arctic.
The GEM program at all three sites covers inter-annual variation in the ecosystem dynamics of glacial, terrestrial and marine ecosystems, with data gathered for more than 2000 parameters, some of which are recorded automatically at very high frequencies (up to 20 Hz for micro-meteorological measurements). This present-day detailed, comprehensive and high-frequency monitoring of ecosystem dynamics raises the question: Which historical sources may be used to anchor the environmental status of the monitored areas back in time?
For the composite landscape dynamics, including glacier, terrestrial and near-coastal environments, it is of great value to study visual, mainly photographic evidence available from different parts of the portfolio of arctic exploration during the 19th and 20th centuries. In this presentation, we review available historical archival material (early photographs, paintings, drawings) from the GEM monitoring locations and their immediate surroundings.
The changing historical settings over the centennial timescale are briefly discussed, and particularly illustrative records from the individual sites are shown. The evidence of centennial-scale change is then evaluated against results from the decadal-scale present-day monitoring.
How to cite: Christensen, T. R., Rasmussen, K., Abermann, J., Raundrup, K., Christoffersen, K., Hansen, B. U., and Arndal, M. F.: Centennial scale environmental change at key arctic observational sites, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11193, https://doi.org/10.5194/egusphere-egu2020-11193, 2020.
EGU2020-10316 | Displays | ITS2.15/BG2.25
Recent and future productivity of Russian forests under climate change
Anatoly Shvidenko, Dmitry Schepaschenko, Sergey Bartalev, Andrey Krasovskii, and Anton Platov
Knowledge of the dynamics of forest productivity, expressed in terms of Growing Stock Volume (GSV), Net Primary Production (NPP) and derivatives such as current increments (net and gross growth), is crucial for understanding the impacts of forest ecosystems on the major global biogeochemical cycles and ultimately on the Earth climate system. For Russia, whose forests cover >20% of the global forest area, this knowledge is currently unsatisfactory because 1) the official forest inventory data are obsolete and substantially biased, as about 50% of Russian forests were inventoried more than 30 years ago; 2) of the above indicators, the Russian forest inventory directly defines only GSV, and by methods that carry substantial systematic errors of unknown size; and 3) remote sensing methods alone still cannot reliably provide necessary details such as species composition, stand age and age structure, or below-ground live biomass. In this presentation, we attempt a systematic reanalysis of the estimates of the above indicators. To this end, a dedicated system was developed to update the forest inventory data for the periods after the latest inventory by forest enterprises (about 1700), based on all available ground-based information and a multi-sensor concept of remote sensing. A hybrid forest cover map was produced as an aggregation of 12 satellite products at a spatial resolution of 150 m. The updating of the main biometric indicators of Russian forests was based on models of the growth and bioproductivity of modal stands. The results of this actualization show a substantial overestimation of areas by the official inventory and an underestimation (up to 20%) of GSV. Comparison with an independent assessment of the dynamics of areas and GSV, made by the Space Research Institute of the Russian Academy of Sciences for the period 2000-2017, showed a high level of agreement.
Using the results of the actualization, live biomass was assessed with a new system of conversion coefficients (Schepaschenko et al. 2018), NPP with the method described in Shvidenko et al. (2007), and current increments with a regionally distributed modelling system for the increment dynamics of modal stands. Climate was analyzed for three periods: “historical” (1948-1975), “current” (1975-2017) and “future” (2020-2100, using all four RCP scenarios). NPP and increments were estimated for the two latter periods using a model that takes into account selected climatic indicators and the fertilization effect of enhanced CO2 concentration. Use of these results offers a substantial opportunity to improve estimates of the carbon budget of Russian forests, particularly those obtained by inventory methods, and to eliminate the existing discrepancies between carbon budget estimates reported in different publications. Projections for the future suggest that under the “critical” scenarios (RCP6.0 and RCP8.5) a significant part of Russian forests has a high probability of reaching a tipping point by the end of this century.
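The conversion-coefficient step can be sketched as a simple multiplicative model, converting growing stock volume to live biomass; this is an illustration only, and the coefficient value below is hypothetical, not taken from Schepaschenko et al. (2018):

```python
# Minimal sketch of a conversion-coefficient approach: live biomass is
# estimated from growing stock volume via a multiplicative coefficient.
# The coefficient value used here is ILLUSTRATIVE ONLY.

def live_biomass_t_per_ha(gsv_m3_per_ha: float, conversion_t_per_m3: float) -> float:
    """Live biomass (t/ha) from growing stock volume (m3/ha)."""
    return gsv_m3_per_ha * conversion_t_per_m3

# Example: a stand with GSV of 150 m3/ha and a hypothetical coefficient of 0.65 t/m3
print(live_biomass_t_per_ha(150.0, 0.65))
```

In practice such coefficients vary by species, region and stand age, which is why a whole system of coefficients, rather than a single value, is needed.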
How to cite: Shvidenko, A., Schepaschenko, D., Bartalev, S., Krasovskii, A., and Platov, A.: Recent and future productivity of Russian forests under climate change, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10316, https://doi.org/10.5194/egusphere-egu2020-10316, 2020.
EGU2020-11155 | Displays | ITS2.15/BG2.25
Adding pieces to the atmosphere-biosphere feedback puzzle
Steffen M. Noe, Junninen Heikki, Ülo Mander, Urmas Hõrrak, Kaido Soosaar, Xuemeng Chen, Alisa Krasnova, Dmitrii Krasnov, Joonas Kollo, Kaupo Komsaare, Helina Lipp, Kalju Tamme, and Ahto Kangur
The SMEAR Estonia station was established in 2012 as the southernmost “Station for Ecosystem-Atmosphere Relations” in Northern Europe. The station has provided continuous data since 2014 and has steadily increased the number of measured variables. Measurements cover atmospheric gases, air ions and particulate matter, radiation and energy fluxes, and forest ecosystem and soil-related parameters.
Located in the hemiboreal zone at the southern edge of the boreal forest biome, the surrounding forests are characterised by a mix of coniferous and broadleaved species. The SMEAR Estonia station’s location near an old-growth forest, the oldest Estonian forest nature reserve, established in 1924, allows for comparisons of atmosphere-biosphere processes between unmanaged and managed forests. The continuous multi-scale data allow us to see first trends in hemiboreal ecosystem-atmosphere interactions in relation to natural and man-made disturbances and climatic drivers.
Here, we report and present our available multi-scale data and research results. Our focus lies on the heterogeneity and the dynamics of atmosphere-biosphere exchange processes and feedbacks in the footprint of the SMEAR Estonia station.
How to cite: Noe, S. M., Heikki, J., Mander, Ü., Hõrrak, U., Soosaar, K., Chen, X., Krasnova, A., Krasnov, D., Kollo, J., Komsaare, K., Lipp, H., Tamme, K., and Kangur, A.: Adding pieces to the atmosphere-biosphere feedback puzzle, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11155, https://doi.org/10.5194/egusphere-egu2020-11155, 2020.
EGU2020-11360 | Displays | ITS2.15/BG2.25
One tower - two heights: A study of mixed hemiboreal forest carbon balance estimated from two eddy covariance systems
Alisa Krasnova, Dmitrii Krasnov, and Steffen Noe
The eddy-covariance (EC) method is widely used to calculate gas fluxes from ecosystems. One of its assumptions is that the footprint of the EC tower is homogeneous in terms of plant species composition, height, age, soil properties, etc. In reality this is usually not the case: European forests are mostly managed and thus comprise compartments of different tree species. This is even more so in the hemiboreal zone, which is characterized by higher heterogeneity and tree species diversity. As a result, individual landscape features can have a stronger influence on the eddy measurements.
Two identical EC systems (LI-7200 gas analyser + Metek uStar Class A anemometer) were placed at 30m (EC30) and 70m (EC70) height on an atmospheric tower of SMEAR Estonia (Station for Measurements of Ecosystem Atmosphere Relations), above a 20m high forest, to measure CO2 fluxes. The footprints comprise compartments of Scots pine (Pinus sylvestris), Norway spruce (Picea abies) and birch (Betula pendula and Betula pubescens), as well as clear-cut areas.
According to the EC30 flux data, the mixed hemiboreal forest ecosystem was a source of CO2 (505 gC m-2 in 2015; 333 gC m-2 in 2016; 276 gC m-2 in 2017; 603 gC m-2 in 2018), while according to EC70 data it was a minor sink in some of the years (-47 gC m-2 in 2015; 10 gC m-2 in 2016; -142 gC m-2 in 2017; 151 gC m-2 in 2018).
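The source/sink contrast between the two heights can be read off the reported annual NEE values; a minimal sketch, assuming the usual micrometeorological sign convention (positive NEE = net CO2 source to the atmosphere):

```python
# Annual NEE (gC m-2) as reported for the two measurement heights.
nee = {
    "EC30": {2015: 505, 2016: 333, 2017: 276, 2018: 603},
    "EC70": {2015: -47, 2016: 10, 2017: -142, 2018: 151},
}

def classify(nee_gc_m2: float) -> str:
    """Positive NEE marks a net CO2 source, negative a net sink (convention)."""
    return "source" if nee_gc_m2 > 0 else "sink"

for system, years in nee.items():
    for year, value in sorted(years.items()):
        print(system, year, classify(value))
```

Classified this way, EC30 makes the ecosystem a source in every year, while EC70 flips between minor sink and minor source, which is the discrepancy the abstract highlights.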
Both the ecosystem respiration (ER) and the gross primary production (GPP) were larger when estimated from EC30 than from EC70 in all years:
GPP EC30: 1738 gC m-2 in 2015; 1669 gC m-2 in 2016; 1892 gC m-2 in 2017; 1654 gC m-2 in 2018;
GPP EC70: 1242 gC m-2 in 2015; 1192 gC m-2 in 2016; 1215 gC m-2 in 2017; 988 gC m-2 in 2018;
ER EC30: 2057 gC m-2 in 2015; 1999 gC m-2 in 2016; 1992 gC m-2 in 2017; 2070 gC m-2 in 2018;
ER EC70: 1088 gC m-2 in 2015; 1120 gC m-2 in 2016; 1019 gC m-2 in 2017; 1021 gC m-2 in 2018;
All four study years (2015-2018) showed similar patterns of difference between the two heights: higher nighttime NEE values at EC30 and similar daytime NEE values throughout the season. The difference between the two systems peaked between late August and mid-September in all years.
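The sign convention behind these annual sums (positive NEE = net CO2 source, negative NEE = net sink) and the height-to-height comparison can be sketched as follows. This is an illustrative script built only from the numbers quoted above, not the authors' processing code:

```python
# Annual NEE sums (g C m-2 yr-1) as reported in the abstract for the two
# measurement heights. Positive = net CO2 source, negative = net CO2 sink.
nee_ec30 = {2015: 505, 2016: 333, 2017: 276, 2018: 603}
nee_ec70 = {2015: -47, 2016: 10, 2017: -142, 2018: 151}

for year in sorted(nee_ec30):
    diff = nee_ec30[year] - nee_ec70[year]
    role30 = "source" if nee_ec30[year] > 0 else "sink"
    role70 = "source" if nee_ec70[year] > 0 else "sink"
    print(f"{year}: EC30 {role30} ({nee_ec30[year]:+} g C m-2), "
          f"EC70 {role70} ({nee_ec70[year]:+} g C m-2), diff {diff} g C m-2")
```

Run over the four years, this makes the contrast explicit: EC30 reports a source every year, while EC70 reports a small sink in 2015 and 2017.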
How to cite: Krasnova, A., Krasnov, D., and Noe, S.: One tower - two heights: A study of mixed hemiboreal forest carbon balance estimated from two eddy covariance systems, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11360, https://doi.org/10.5194/egusphere-egu2020-11360, 2020.
EGU2020-8027 | Displays | ITS2.15/BG2.25
Energy exchange between surface and atmosphere on the Severnaya Zemlya archipelago in 2013 – 2019 years
Alexander Makshtas, Irina Makhotina, Vasily Kustov, Tuomas Laurila, Irina Bolshakova, Vladimir Sokolov, and Juha-Pekka Tuovinen
Based on meteorological observations carried out in 2013-2019 at the Research Station “Ice Base Cape Baranova” (RS), and using an original algorithm that takes into account measurement accuracy and footprints, the components of the surface heat budget were calculated. In winter, due to radiative cooling, the turbulent sensible heat flux (H) is directed toward the underlying surface. In summer, owing to radiative heating of the low-albedo surface, H is directed toward the atmosphere and reaches 25% of the incoming short-wave radiation. The turbulent latent heat flux (LE) is directed toward the atmosphere in winter, with a magnitude of no more than 10% of H. During summer, LE has no predominant direction.
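The budget components described here can be combined in a schematic balance. The sketch below assumes a "positive toward the surface" sign convention and uses made-up winter values; it is not the station's original algorithm:

```python
# Schematic surface heat budget (assumed sign convention: fluxes directed
# toward the surface are positive; all values in W m-2).
def heat_budget(sw_down, albedo, lw_net, h, le):
    sw_net = sw_down * (1.0 - albedo)  # absorbed short-wave radiation
    return sw_net + lw_net + h + le

# Polar-night example (hypothetical values): no short-wave input, net
# long-wave radiative cooling, H toward the surface, LE (<= 10% of H, as
# in the abstract) directed away from the surface toward the atmosphere.
b_winter = heat_budget(sw_down=0.0, albedo=0.8, lw_net=-30.0, h=10.0, le=-1.0)
```

With these illustrative numbers the budget residual is negative, i.e. the surface loses heat, consistent with the winter regime described above.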
Comprehensive monitoring carried out at RS since 2013 has allowed us to examine the role of large-scale processes in the polar atmosphere and hydrosphere in the formation of the local climate of the region. In 2016, 2018 and 2019, the sea ice cover of the Barents and Kara Seas in October, the month of active freezing of the active soil layer, occupied its smallest area since 1978 (http://wdc.aari.ru/datasets/d0042/). This circumstance, along with peculiarities of atmospheric circulation, led to anomalous temperature and humidity regimes in the lower troposphere. In these years the monthly mean air temperature up to 700 hPa was about -4 °C, compared to -7 to -11 °C in 2013-2015 and 2017. In 2016 the lower troposphere was warmer by 2-3 °C, and the specific humidity in the atmospheric boundary layer was 30-60% higher than in 2013-2015 and 2017. Even in 2018, when the area of open water adjacent to the Severnaya Zemlya archipelago was significantly larger than in 2016, the specific humidity at altitudes up to 3 km was 4-12% lower.
In 2016 the monthly mean wind speed, mainly from the southwest, reached its maximum value of more than 7 m/s. This weakened the stratification of the atmospheric surface layer (z/L < 0.2). The specific humidity also increased significantly, up to 3.0 and 2.7 g/kg at 2 m and at z0, respectively. Long-wave radiation fluxes increased by more than 15-20 W/m2. At the same time, due to the increase of the underlying surface temperature, its long-wave radiative cooling, which was not compensated by the increase of incoming long-wave radiation, increased to -27 W/m2. H, directed toward the underlying surface, increased to 10 W/m2, and LE, directed toward the atmosphere, almost doubled, up to 12 W/m2. As a result of these multidirectional changes in the heat fluxes defining the surface heat balance, its value in October 2016 (-31.6 W/m2) was comparable to that calculated for the other years.
The most probable explanation of the revealed features of the atmospheric boundary and surface layers in October 2016 is the absence of sea ice cover in the waters adjacent to the archipelago, which prevented cooling of the atmosphere, together with a strong zonal component of the wind velocity, which caused the transfer of warm and moist air masses of Atlantic origin into the study area.
This work was carried out with the financial support of the Ministry of Science and Higher Education of the Russian Federation (project no. RFMEFI61619X0108).
How to cite: Makshtas, A., Makhotina, I., Kustov, V., Laurila, T., Bolshakova, I., Sokolov, V., and Tuovinen, J.-P.: Energy exchange between surface and atmosphere on the Severnaya Zemlya archipelago in 2013 – 2019 years, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8027, https://doi.org/10.5194/egusphere-egu2020-8027, 2020.
EGU2020-10424 | Displays | ITS2.15/BG2.25
Novel comprehensive field-based monitoring dataset of largest Siberian river particulate flux into Arctic ocean
Sergey Chalov and Nikolay Kasimov
Northern rivers transport huge quantities of water and constituents from the continents to the Arctic Ocean. The transport mode of the chemical flux is poorly monitored, and existing estimates of river flux are characterized by high uncertainty. Since 2018, the ArcticFlux monitoring campaign has been sampling the four largest Siberian rivers (Ob, Enisey, Lena and Kolyma) multiple times per year at the most downstream river cross-section unaffected by river mouth processes (tides, surges, etc.). Using Acoustic Doppler Current Profiler (ADCP) acquisitions together with sediment depth-profile sampling, we build a simple model to derive the bed and suspended seasonal fluxes, grain size, and particulate heavy metal distributions. The study demonstrates the significance of hydraulic control for metal partitioning within the river and explains spatial (inter-basin) variations in particulate flux in terms of local hydrology, erosion rates and catchment lithology. Using the ADCP acquisitions with sediment depth-profile sampling of the Ob, Enisey, Lena and Kolyma, we aim to build a model to derive the annual sediment flux and the particulate flux of selected metals. The dataset is also used to assess uncertainties in selected sediment quantity and quality data, including the contributions of vertical and cross-sectional variations to flux estimates, and the resulting requirements for sampling strategy. Based on these modelling techniques and the application of erosion models to all four Arctic catchments, the project will also focus on a novel quantitative assessment of the contribution of bank and catchment erosion to the chemical flux.
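The cross-section flux calculation implied by combining ADCP velocities with depth-profile concentration sampling can be sketched as follows. Cell geometry, units and values here are hypothetical, not taken from the ArcticFlux dataset:

```python
# Instantaneous particulate flux through a river cross-section: sum of
# velocity * concentration * cell area over the ADCP measurement cells.
def cross_section_flux(velocities, concentrations, cell_area):
    """velocities in m/s, concentrations in kg/m3, cell_area in m2 -> kg/s."""
    return sum(v * c * cell_area for v, c in zip(velocities, concentrations))

# Toy example: three cells of 10 m2 each (assumed values)
flux = cross_section_flux([1.2, 1.0, 0.8], [0.05, 0.04, 0.03], 10.0)  # kg/s
```

Integrating such instantaneous cross-section fluxes over the seasonal sampling campaigns is what allows annual sediment and particulate metal fluxes to be estimated.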
How to cite: Chalov, S. and Kasimov, N.: Novel comprehensive field-based monitoring dataset of largest Siberian river particulate flux into Arctic ocean, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10424, https://doi.org/10.5194/egusphere-egu2020-10424, 2020.
EGU2020-20359 | Displays | ITS2.15/BG2.25
A method for the assessment of forest regrowth site index based on Earth observations and modelling
Vasily Zharko, Sergey Bartalev, and Mikhail Bogodukhov
Presented is a method for the estimation of a productivity class/site index of forest regrowth after stand-replacing natural and human-induced disturbances. The method uses Global Forest Change project data on the spatial distribution of forest loss sites (including the date of the disturbance) at 30 m resolution, based on Landsat data. These data, resampled to 100 m spatial resolution, are jointly analysed with a Russian land cover map for 2016, developed from 100 m PROBA-V data, to identify reforestation sites and determine the forest type. Based on this information, an appropriate forest growth model is chosen to simulate the dynamics of forest characteristics for different site indexes. Finally, information on forest characteristics from satellite-based products is compared to the modelling results for the forest age, computed as the difference between the date of the disturbance and the date of the satellite data product. Each reforestation site is assigned the productivity class that yields the best consistency between the modelling results and the existing satellite data products.
Application of the presented method was tested over the European part of Russia using the 100 m global growing stock volume (GSV) map developed within the Globbiomass project and lidar vegetation canopy height measurements from the ICESat-2/ATLAS system (ATL08 data product). It was found that the ICESat-2/ATLAS data are better suited for the proposed approach.
The presented method is aimed at the development of a reference dataset on forest parameters, since the obtained information on forest type, age and site index can together be used to estimate other crucial characteristics, including GSV, mean height, mean stem diameter, basal area, productivity, and growth and mortality parameters, using the appropriate model. It is also worth mentioning that the proposed approach allows the estimation of characteristics of young forests, which are rarely represented in field survey-based reference datasets.
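The final assignment step can be illustrated schematically: for a site of known age, choose the site index whose growth-model prediction best matches the satellite-observed characteristic (e.g. canopy height). The growth curve below is a made-up placeholder, not one of the models used by the authors:

```python
import math

# Pick the site index minimizing the mismatch between the growth model's
# prediction at the known stand age and the satellite-observed value.
def assign_site_index(age, observed_height, growth_model, site_indexes):
    return min(site_indexes,
               key=lambda si: abs(growth_model(age, si) - observed_height))

# Placeholder growth curve (hypothetical): higher site index -> taller
# stand at a given age, saturating with age.
def toy_model(age, si):
    return si * 6.0 * (1.0 - math.exp(-0.05 * age))

best = assign_site_index(age=20, observed_height=12.0,
                         growth_model=toy_model, site_indexes=[1, 2, 3, 4, 5])
```

In the actual method the observed value would come from the Globbiomass GSV map or the ICESat-2 ATL08 canopy height product, and the growth model from regional yield tables.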
This work was supported by the Russian Science Foundation [grant number 19-77-30015]. Data processing and analysis was carried out using resources of the Centre for collective use ‘IKI-Monitoring’ developed by the Space Research Institute of the Russian Academy of Sciences.
How to cite: Zharko, V., Bartalev, S., and Bogodukhov, M.: A method for the assessment of forest regrowth site index based on Earth observations and modelling, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20359, https://doi.org/10.5194/egusphere-egu2020-20359, 2020.
EGU2020-21798 | Displays | ITS2.15/BG2.25
Potential mechanisms for New Particle Formation and growth from aerosol mixing state and volatility observations
Konstantinos Eleftheriadis, Maria Gini, Luis Mendes, Jakub Ondracek, Radovan Krejci, and Kjetil Tørseth
Measurements of the physico-chemical properties of ultrafine particles in the Arctic region have been identified as an important aspect of better understanding aerosol-cloud-climate interactions. Atmospheric new particle formation (NPF) is a phenomenon observed in many different environments around the world. Although the frequency of atmospheric NPF events is expected to increase in the Arctic region due to sea ice melt (Dall´Osto, et al., 2017), only a limited number of studies focus on nucleation mode particles in this remote environment; current knowledge is limited with respect to the chemical precursors of the resulting nanoparticles and the compounds involved in their subsequent growth. It is therefore critical to understand the mechanism leading to their role as Cloud Condensation Nuclei (CCN).
The initial steps involved in NPF and subsequent growth are usually clustering and condensation of both organic and inorganic vapors, while ions are also known to be involved in the nucleation process. If newly formed particles are not lost to coagulation and manage to grow to sizes > 50 nm, they can act as CCN. In particular, NPF has often been observed to be related to sulfuric acid (SA). However, the extent of sulphate production by biogenic sources, including biogenic dimethyl sulfide (DMS) and methanesulfonic acid (MSA), versus that due to anthropogenic SO2 remains an outstanding issue, especially in the Arctic troposphere. Biogenic organics in the Arctic Ocean, possibly derived from both phytoplankton and terrestrial vegetation, could significantly influence the chemical properties of Arctic aerosols (Choi et al., 2019). This coincides very well with MOSAiC's major interdisciplinary atmospheric focus on dimethyl sulfide, a gas produced by metabolic processes in algae and other marine microorganisms which, as described, plays a role in the complex chemical processes forming aerosols.
Here we present results from one season (May-August) of continuous particle volatility measurements conducted by means of a custom-made and well-characterized nano-volatility tandem DMA (nano-VTDMA) system installed at the Zeppelin station, Ny-Ålesund, Svalbard. The nano-VTDMA system consists of a medium DMA (M-DMA), a Nano-TD (IAST, Switzerland), a nano-DMA (TSI) and a CPC (TSI, 3776). The nano-VTDMA measurement cycle is typically arranged into three steps. First, a monodisperse particle fraction is selected by the M-DMA; four particle sizes are selected (10, 25, 50 and 80 nm). Then, the selected particles pass through the thermal denuder (Model NanoTD), operated at four selected temperatures in the range from 30 °C to 250 °C. Finally, the residual particle number size distributions are measured by the nano-DMA and the CPC (3776). Parallel DMPS measurements are also examined to identify the NPF events under study.
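A standard quantity derived from such a VTDMA cycle is the volume fraction remaining after heating, obtained from the mode diameters before and after the thermal denuder. The sketch below shows this calculation with assumed diameters; it is not the authors' processing code:

```python
# Volume fraction remaining (VFR) after thermal conditioning: the cube of
# the diameter ratio, since particle volume scales with diameter cubed.
# More volatile particles shrink more at a given denuder temperature.
def volume_fraction_remaining(d_heated, d_initial):
    return (d_heated / d_initial) ** 3

# Example (assumed values): a 50 nm particle shrinking to 40 nm
vfr = volume_fraction_remaining(40.0, 50.0)  # ~0.512
```

Scanning VFR across the four denuder temperatures for each selected size yields the volatility distribution used to infer particle composition and mixing state.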
We analyze the 12 observed events identified as fresh particle bursts, taking into account parallel measurements of tracer gases and black carbon in order to link them to natural or anthropogenic emissions. The area of air mass origin is also used to help clarify where these fresh particles originate.
How to cite: Eleftheriadis, K., Gini, M., Mendes, L., Ondracek, J., Krejci, R., and Tørseth, K.: Potential mechanisms for New Particle Formation and growth from aerosol mixing state and volatility observations, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21798, https://doi.org/10.5194/egusphere-egu2020-21798, 2020.
EGU2020-8216 | Displays | ITS2.15/BG2.25
Characterization of organic aerosol across the Arctic land surface
Imad El Haddad, Vaios Moschos, Julia Schmale, Urs Baltensperger, and André S.H. Prévôt
Organic compounds are of high importance in the Arctic because they contribute between one and two thirds to the submicron aerosol mass and may be co-emitted or interact with other aerosol species, such as black carbon, sulfate and metals; they also act as a vehicle of transport for persistent organic pollutants to the Arctic. Organic-containing aerosols (OA) can both absorb and scatter light, thereby changing the radiative balance, and may act as cloud condensation nuclei. OA might become increasingly important in a warming Arctic due to anthropogenic activities and natural emissions, e.g., as a result of expanded vegetation, intensified wildfires, decreasing sea ice extent and thickness leading to higher release of marine volatile organic compounds, and thawing tundra soils (permafrost) along shores and rivers. The continuous monitoring of organic carbon along with a detailed chemical analysis to determine its natural and anthropogenic sources, seasonal variability and inter-annual evolution in the Arctic is of prime importance for improved climate simulations and a realistic assessment of the effectiveness of potential mitigation or adaptation actions.
The OA chemical composition and corresponding sources remain largely unknown, partly due to the challenging measurement conditions. For example, tremendous effort is required to deploy online aerosol mass spectrometry in various environments for long time periods. To overcome this challenge, an offline Aerodyne aerosol mass spectrometer (AMS) technique has been introduced, based on re-aerosolized liquid filter extracts. The method is capable of covering broad spatial and seasonal observations as well as determining the sources of OA (e.g. primary versus secondary, biogenic versus anthropogenic). Within the project iCUPE (Integrative and Comprehensive Understanding on Polar Environments), we extend the coverage of this technique to the most climate-change-sensitive region worldwide, using year-long/multi-year (2014 to 2019) quartz fiber filter samples collected at 8 surface stations from 68° N to 83° N, covering six Arctic Council nations including the least investigated Siberian Arctic.
Here, we present a project overview and first results from filter water extracts nebulized in argon and measured with a high-resolution Long-Time-of-Flight AMS (average resolution ~7k). Preliminary data suggest significant variability among sites and seasons in the relative fraction of fragment markers of certain sources, indicating largely region-specific sources of OA across the Arctic land surface. For example, during the same time period we observed more (roughly 90%) and more strongly oxygenated fragments (especially at mass-to-charge ratio m/z 44) at extremely remote sites. Our average fCO2 (m/z = 44) of 0.26 ± 0.08 and CO+:CO2+ of 0.40 ± 0.14 both indicate more oxidized OA than in continental aerosols. The Van Krevelen diagram shows that the addition of carboxylic acid groups (or alcohol + carbonyl on different C atoms) with significant fragmentation may dominate OA oxidation at high O:C. We further discuss the integration of this analysis with advanced statistical tools for factor identification on the OA fraction. Additionally, the samples will be characterized with ultra-high-resolution mass spectrometry coupled with liquid chromatography for two-dimensional molecular identification of primary aerosol tracers and secondary organic aerosol precursors.
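The fCO2 marker fraction quoted above is simple arithmetic on AMS signals: the CO2+ signal at m/z 44 divided by the total organic-aerosol signal, a standard proxy for the degree of oxygenation. The sketch below illustrates it with assumed signal values; variable names are ours, not the authors':

```python
# fCO2 (often written f44): fraction of the total organic-aerosol signal
# contributed by the CO2+ fragment at m/z 44. Higher values indicate more
# oxidized (aged) organic aerosol.
def f_co2(signal_mz44, total_oa_signal):
    return signal_mz44 / total_oa_signal

# With the abstract's mean fCO2 of 0.26, a total OA signal of 100
# (arbitrary units) would imply an m/z 44 contribution of 26 units.
f = f_co2(26.0, 100.0)  # 0.26
```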
How to cite: El Haddad, I., Moschos, V., Schmale, J., Baltensperger, U., and Prévôt, A. S. H.: Characterization of organic aerosol across the Arctic land surface, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8216, https://doi.org/10.5194/egusphere-egu2020-8216, 2020.
Organic compounds are of high importance in the Arctic because they contribute between one and two thirds to the submicron aerosol mass and may be co-emitted or interact with other aerosol species, such as black carbon, sulfate and metals; they also act as a vehicle of transport for persistent organic pollutants to the Arctic. Organic-containing aerosols (OA) can both absorb and scatter light, thereby changing the radiative balance, and may act as cloud condensation nuclei. OA might become increasingly important in a warming Arctic due to anthropogenic activities and natural emissions, e.g., as a result of expanded vegetation, intensified wildfires, decreasing sea ice extent and thickness leading to higher release of marine volatile organic compounds, and thawing tundra soils (permafrost) along shores and rivers. The continuous monitoring of organic carbon along with a detailed chemical analysis to determine its natural and anthropogenic sources, seasonal variability and inter-annual evolution in the Arctic is of prime importance for improved climate simulations and a realistic assessment of the effectiveness of potential mitigation or adaptation actions.
The OA chemical composition and corresponding sources remain largely unknown, partly due to the challenging measurement conditions. For example, tremendous effort is required to deploy online aerosol mass spectrometry in various environments for long time periods. To overcome this challenge, an offline Aerodyne aerosol mass spectrometer (AMS) technique has been introduced, based on re-aerosolized liquid filter extracts. The method is capable of covering broad spatial and seasonal observations as well as determining the sources of OA (e.g. primary versus secondary, biogenic versus anthropogenic). Within the project iCUPE (Integrative and Comprehensive Understanding on Polar Environments), we extend the coverage of this technique to the most climate-change-sensitive region worldwide, using year-long/multi-year (2014 to 2019) quartz fiber filter samples collected at 8 surface stations from 68° N to 83° N, covering six Arctic Council nations including the least investigated Siberian Arctic.
Here, we present a project overview and first results from filter water extracts nebulized in argon and measured with a high-resolution Long-Time-of-Flight AMS (average resolution ~7k). Preliminary data suggest significant variability among sites and seasons in the relative fraction of marker fragments of certain sources, indicating largely region-specific sources of OA across the Arctic land surface. For example, during the same time period we observed a larger fraction (roughly 90%) of oxygenated fragments, and more strongly oxygenated ones (especially at mass-to-charge ratio m/z 44), at the most remote sites. Our average fCO2 (m/z 44) of 0.26 ± 0.08 and CO+:CO2+ of 0.40 ± 0.14 both indicate more oxidized OA than in continental aerosols. The Van Krevelen diagram shows that the addition of carboxylic acid groups (or alcohol+carbonyl groups on different C atoms) with significant fragmentation may dominate OA oxidation at high O:C. We further discuss the integration of this analysis with advanced statistical tools for factor identification on the OA fraction. Additionally, the samples will be characterized with ultra-high-resolution mass spectrometry coupled with liquid chromatography, for a two-dimensional molecular identification of primary aerosol tracers and secondary organic aerosol precursors.
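The oxidation metrics above can be illustrated numerically. The following is a hedged sketch (not the authors' processing chain): it computes f44, the fraction of organic signal at m/z 44, and an approximate elemental O:C via the widely used Aiken et al. (2008) parameterization; all input numbers are hypothetical.

```python
# Sketch: oxidation-state metrics from AMS fragment signals (hypothetical values).
# f44 is the fraction of total organic signal at m/z 44 (mostly the CO2+ ion);
# the O:C estimate uses the Aiken et al. (2008) parameterization.

def f44(signal_mz44: float, total_oa_signal: float) -> float:
    """Fraction of organic-aerosol signal carried by the m/z 44 fragment."""
    return signal_mz44 / total_oa_signal

def oc_ratio_aiken(f44_value: float) -> float:
    """Approximate elemental O:C from f44 (Aiken et al., 2008)."""
    return 3.82 * f44_value + 0.0794

# Illustrative signal values only (not the abstract's raw data):
f = f44(signal_mz44=2.6, total_oa_signal=10.0)   # -> 0.26, matching the reported mean
print(f"f44 = {f:.2f}, estimated O:C = {oc_ratio_aiken(f):.2f}")
```

Higher f44 (and hence O:C) marks more aged, oxidized OA, which is the sense in which the reported mean of 0.26 exceeds typical continental values.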
How to cite: El Haddad, I., Moschos, V., Schmale, J., Baltensperger, U., and Prévôt, A. S. H.: Characterization of organic aerosol across the Arctic land surface, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8216, https://doi.org/10.5194/egusphere-egu2020-8216, 2020.
EGU2020-4908 | Displays | ITS2.15/BG2.25
Long-term measurements (2010–2019) of carbonaceous aerosol at the Zotino Tall Tower Observatory (ZOTTO) in central SiberiaEugene Mikhailov, Olga Ivanova, Sergey Vlasenko, Evgeniy Nebos’ko, Meinrat Andreae, and Ulrich Pöschl
The Siberian forests cover about 70% of the total area of the Eurasian boreal forest and are an important factor controlling global and regional climate. Forest fires and biogenic emissions from coniferous trees and forest litter are the main sources of carbonaceous aerosols emitted into the atmosphere over boreal forests. Two classes of carbonaceous aerosol are commonly present in ambient air: elemental carbon (EC), often referred to as black carbon or soot, and organic carbon (OC). Both OC and EC are important agents in the climate system, affecting the optical characteristics and thermal balance of the atmosphere both directly, by absorbing and scattering incoming solar radiation, and indirectly, by modifying cloud properties.
In 2010, a filter-based sampler was mounted at the background ZOTTO station (60.8° N, 89.4° E; 114 m a.s.l.) for aerosol chemical analysis. We present here the time series of carbonaceous aerosol measurements over 10 years (2010–2019). We investigate the seasonal variations in PM, EC, and OC. These data are supplemented by measurements of aerosol absorption (PSAP) and scattering (TSI 3563) coefficients. We analyze polluted, background and near-pristine periods, as well as the most pronounced pollution events and their sources, observed over the entire sampling campaign.
We also present ground-based measurements of aerosol cloud condensation nuclei (CCN) properties and hygroscopicity parameter values obtained from the CCN dataset. A method for assessing the condensation properties of aerosols from satellite measurements has been implemented, based on data from the VIIRS multichannel radiometer aboard the polar-orbiting Suomi satellite (USA). The CCN parameters of aerosol particles determined from satellite datasets have been compared with those obtained from ground-based measurements.
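As a sketch of how a hygroscopicity parameter can be derived from CCN activation data (the abstract does not specify its exact method), the following applies the single-parameter kappa-Koehler approximation of Petters and Kreidenweis (2007); the activation point used is hypothetical.

```python
import math

# Sketch: hygroscopicity parameter kappa from a CCN activation point, using the
# kappa-Koehler approximation (Petters & Kreidenweis, 2007):
#   kappa = 4 A^3 / (27 D_d^3 ln^2 S_c),  with  A = 4 sigma M_w / (R T rho_w)
# D_d: dry diameter [m]; S_c: critical saturation ratio (1 + s_c/100).

def kelvin_A(T=298.15, sigma=0.072, M_w=0.018, rho_w=1000.0, R=8.314):
    """Kelvin parameter A [m] for water at temperature T."""
    return 4.0 * sigma * M_w / (R * T * rho_w)

def kappa_from_activation(dry_diameter_m, supersat_percent, T=298.15):
    """Kappa from dry diameter and critical supersaturation (percent)."""
    A = kelvin_A(T)
    ln_Sc = math.log(1.0 + supersat_percent / 100.0)
    return 4.0 * A**3 / (27.0 * dry_diameter_m**3 * ln_Sc**2)

# Hypothetical activation point: 100 nm particles activating at 0.2 % supersaturation.
print(f"kappa = {kappa_from_activation(100e-9, 0.2):.2f}")
```

Scanning supersaturation for a fixed dry size (or vice versa) yields the activation curve from which such a kappa is normally fitted.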
Acknowledgments. This work was supported by the Russian Science Foundation (grant agreement no. 18-17-00076) and Max Planck Society (MPG).
How to cite: Mikhailov, E., Ivanova, O., Vlasenko, S., Nebos’ko, E., Andreae, M., and Pöschl, U.: Long-term measurements (2010–2019) of carbonaceous aerosol at the Zotino Tall Tower Observatory (ZOTTO) in central Siberia, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4908, https://doi.org/10.5194/egusphere-egu2020-4908, 2020.
EGU2020-201 | Displays | ITS2.15/BG2.25
Enviro-HIRLAM modeling of atmospheric aerosols and pollution transport and feedbacks: North-West Russia and Northern EuropeGeorgiy Nerobelov, Margarita Sedeeva, Alexander Mahura, Roman Nuterman, and Sergei Smyshlyaev
Urbanization has been proceeding rapidly during the last decades, and with the growing number of industries the amount of anthropogenic aerosol and gas pollution has been increasing. Some pollutants affect human health, while others lead to changes in various meteorological parameters. In this study, the influence of aerosols on selected meteorological parameters (air temperature at 2 m, specific humidity, total cloud cover, precipitation), as well as the atmospheric dispersion of anthropogenic SO2 and SO4 and their deposition on water bodies during January and August 2010, was evaluated using the Enviro-HIRLAM online integrated modelling system. We focused on the territories of North-West Russia (with zooming to St. Petersburg, Moscow and Helsinki) and on the Kola Peninsula and the Northern European countries. Four model runs were performed: CTRL (no aerosol effects), DAE (direct aerosol effect), IDAE (indirect aerosol effect) and DAE+IDAE (direct + indirect aerosol effects).
The aerosol influence was stronger during August 2010. DAE generally led to a decrease in 2-m air temperature and total cloud cover, whereas IDAE and DAE+IDAE increased these parameters. DAE decreased specific humidity in January and increased it in August 2010; IDAE and DAE+IDAE increased it in January 2010 and decreased it in August 2010. All aerosol effects caused a reduction in precipitation in both months. Zooming to the metropolitan areas in August 2010, DAE decreased air temperature in St. Petersburg and Helsinki but increased it in Moscow. IDAE decreased temperature in St. Petersburg and increased it in the other cities. DAE+IDAE decreased air temperature in St. Petersburg and Helsinki but increased it in Moscow. DAE decreased total cloud cover in all three cities, whereas IDAE and DAE+IDAE increased it. All effects led to decreases in specific humidity and precipitation over the three cities. In January, DAE decreased all analyzed parameters in the three cities, except precipitation in St. Petersburg; IDAE and DAE+IDAE caused growth in all parameters, except precipitation in Helsinki and temperature in Moscow (DAE+IDAE).
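The scenario design described above (perturbed runs versus CTRL) implies quantifying each aerosol effect as a difference field between runs. A minimal sketch with hypothetical temperature arrays rather than actual Enviro-HIRLAM output:

```python
import numpy as np

# Sketch: an aerosol effect diagnosed as (perturbed run - control), following
# the CTRL / DAE / IDAE / DAE+IDAE experiment design. Fields are hypothetical
# 2-D arrays of 2-m air temperature on a model grid.

rng = np.random.default_rng(0)
t2m_ctrl = 15.0 + rng.normal(0.0, 0.5, size=(50, 60))               # control run
t2m_dae = t2m_ctrl - 0.3 + rng.normal(0.0, 0.1, size=(50, 60))      # direct-effect run

effect = t2m_dae - t2m_ctrl          # per-grid-cell aerosol effect
print(f"domain-mean DAE effect on T2m: {effect.mean():+.2f} K")
```

The same subtraction, applied to humidity, cloud cover and precipitation fields, and averaged over city sub-domains, yields the per-city numbers the abstract summarizes.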
The analysis of the modeled SO2 spatial-temporal distribution showed that the number of cases of transboundary pollution over Northern Europe was higher during August 2010. An anticyclonic circulation led to high concentrations of SO2 over its sources during the same period. SO2 concentrations quite often reached their maximum values during the periods of highest air temperatures. The ambient air quality standard for SO2 was exceeded 13 times during the whole period studied. Only one exceedance occurred on the territory of Norway (Kirkenes); the rest occurred on the Kola Peninsula (Russia). For wet deposition of sulphates, both the number of such cases and the deposited amounts were higher during August 2010. Among the Northern European countries, the maximum of deposited sulphates was observed over Finland and the minimum over Sweden.
How to cite: Nerobelov, G., Sedeeva, M., Mahura, A., Nuterman, R., and Smyshlyaev, S.: Enviro-HIRLAM modeling of atmospheric aerosols and pollution transport and feedbacks: North-West Russia and Northern Europe, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-201, https://doi.org/10.5194/egusphere-egu2020-201, 2020.
EGU2020-21193 | Displays | ITS2.15/BG2.25
Trace metal and nutrient fluxes into Arctic ocean by largest Siberian rivers (ArcticFlux)Vasilii Efimov, Sergey Chalov, Dmitry Magritsky, Anatolii Tsyplenkov, Liudmila Efimova, and Nikolay Kasimov
Precise estimation of river runoff is one of the most challenging fields of river hydrology. Quantitative assessment of the fluxes of suspended and, especially, bed load, as well as their relation to the dissolved load, remains poorly studied, with a crucial need for in-situ observations, especially in large rivers.
The project focuses on a semi-empirical and modeling study of the flows, concentrations, modes and loads of trace metal and nutrient fluxes of the major Arctic rivers. Monitoring stations were organized at the outlets of the largest Siberian rivers (Ob, Yenisei, Lena, Kolyma), which transport more than 60% of the water flow from the Russian Arctic. Observations were made for high- and low-water regime periods on a regular basis, and the total number of samples to date exceeds 210. Each sample was analyzed for trace metals (68 elements), nutrients, and organic carbon content, both in dissolved and particulate (suspended and bed load) forms. These samples allow determining the annual and seasonal distribution of up to 70% of the chemical elements and substances carried by the large rivers of the Russian Arctic into the Arctic Ocean.
For more accurate flux assessment, a new sampling technique was used. It allows determining all components of the dissolved, suspended and, especially, bed load along the river cross-section, and includes sampling at 3–5 verticals at different depths. As a result, it is possible to determine the variability of the fluxes across the width of the section. For example, suspended sediment concentrations on the left and right banks of the Kolyma River differ by a factor of 6–7 (up to 70 mg/dm³), and there are significant differences in the Ni, Fe, Al, Cu, and Pb fluxes. Heterogeneity in the distribution of sediment and chemical flow across the width of the rivers arises due to the inflow of tributaries and as a result of permafrost melting and wave erosion of the banks. The study of the intensity of bank erosion and sedimentation at the outlets of Arctic rivers, both in the field and from remote sensing data, is a significant part of the project. Based on modeling techniques and the application of erosion models to all four Arctic catchments, the project will also focus on a novel quantitative assessment of the contribution of bank and catchment erosion to chemical and sediment loads.
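The multi-vertical sampling described above can be turned into a section load by discharge-weighting the per-vertical concentrations. A minimal sketch with hypothetical numbers (not the Kolyma data):

```python
# Sketch: cross-section flux from multi-vertical sampling (hypothetical values).
# Each vertical i contributes its partial discharge q_i [m3/s] times the local
# concentration c_i [mg/dm3, numerically equal to g/m3]; the sum is the section
# load in g/s.

def section_flux(partial_discharges, concentrations):
    """Discharge-weighted load across the section, g/s."""
    return sum(q * c for q, c in zip(partial_discharges, concentrations))

# Five verticals; concentrations differ strongly between banks, as on the Kolyma.
q = [800.0, 1200.0, 1500.0, 1200.0, 800.0]   # partial discharges, m3/s
c = [10.0, 15.0, 25.0, 45.0, 70.0]           # suspended sediment, mg/dm3
print(f"suspended load: {section_flux(q, c):.0f} g/s")
```

A single mid-channel sample multiplied by total discharge would miss exactly the bank-to-bank contrast the abstract reports, which is the motivation for the multi-vertical scheme.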
The project concept is considered part of the marine component of the Pan-Eurasian Experiment (PEEX) programme and builds a bridge to integrate the PEEX marine components with the existing terrestrial/atmospheric PEEX components.
The reported study was funded by RFBR under research project No. 18-05-60219.
How to cite: Efimov, V., Chalov, S., Magritsky, D., Tsyplenkov, A., Efimova, L., and Kasimov, N.: Trace metal and nutrient fluxes into Arctic ocean by largest Siberian rivers (ArcticFlux), EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21193, https://doi.org/10.5194/egusphere-egu2020-21193, 2020.
EGU2020-2580 | Displays | ITS2.15/BG2.25
Estimates of anthropogenic CO2 emissions from satellite and ground based measurementsYury Timofeyev, George Nerobelov, Sergey Smyshlyaev, Ivan Berezin, Yana Virolainen, Maria Makarova, Anatoly Poberovsky, Alexander Polyakov, and Stefania Foka
In recent years, satellite methods have played an important role in CO2 monitoring. Various satellite instruments (SCIAMACHY, AIRS, GOSAT, OCO-2, etc.), validated by ground-based and aircraft measurements, allow retrieving the column-averaged CO2 mixing ratio (XCO2) with high accuracy (0.25–1.0%). The relatively high spatial resolution of a number of instruments (for example, OCO-2) allows studies of spatial and temporal CO2 variations, which, under appropriate conditions, makes it possible to estimate anthropogenic emissions from different cities.
Various techniques for determining anthropogenic greenhouse gas emissions from satellite measurement data are considered: the source pixel mass balance method, a plume dispersion model, and an atmospheric inversion system.
Numerical atmospheric models were adapted to different Russian megacities on the basis of three-dimensional modeling and comparison with the results of various local and remote measurements. Based on numerical experiments, the errors of the various satellite techniques for determining emissions were evaluated with respect to their causes (measurement errors, the quality of the a priori and additional experimental information used, the adequacy of the numerical atmospheric model used, etc.). Anthropogenic CO2 emissions in St. Petersburg, Moscow and other Russian cities are estimated using various satellite measurements. These estimates of anthropogenic emissions are compared with data obtained by different methods and for different cities.
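One of the techniques named above, the mass-balance approach, can be sketched as a simple cross-plume integral; the formulation below is a simplified textbook version, not the authors' implementation, and all plume numbers are hypothetical.

```python
# Sketch of a cross-sectional mass-balance emission estimate. A 1 ppm XCO2
# enhancement corresponds to roughly 0.0157 kg CO2 per m2 of column
# (standard surface pressure assumed).

M_CO2, M_AIR = 0.04401, 0.02897   # molar masses, kg/mol
P_SURF, G = 101325.0, 9.81        # surface pressure [Pa], gravity [m/s2]

def column_enhancement(delta_xco2_ppm):
    """Column CO2 mass enhancement [kg/m2] for a given XCO2 excess [ppm]."""
    return delta_xco2_ppm * 1e-6 * (M_CO2 / M_AIR) * P_SURF / G

def mass_balance_emission(delta_xco2_ppm, wind_ms, plume_width_m):
    """Emission rate [kg/s] from a cross-plume transect downwind of the source."""
    return column_enhancement(delta_xco2_ppm) * wind_ms * plume_width_m

# Hypothetical city plume: 1.2 ppm mean enhancement, 5 m/s wind, 20 km width.
E = mass_balance_emission(1.2, 5.0, 20e3)
print(f"estimated emission: {E:.0f} kg CO2/s (~{E * 86400 / 1e6:.0f} kt/day)")
```

The error factors listed in the abstract map directly onto this sketch: XCO2 retrieval noise enters through the enhancement, background choice through what counts as "excess", and wind uncertainty multiplies the result linearly.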
How to cite: Timofeyev, Y., Nerobelov, G., Smyshlyaev, S., Berezin, I., Virolainen, Y., Makarova, M., Poberovsky, A., Polyakov, A., and Foka, S.: Estimates of anthropogenic CO2 emissions from satellite and ground based measurements, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2580, https://doi.org/10.5194/egusphere-egu2020-2580, 2020.
EGU2020-3218 | Displays | ITS2.15/BG2.25
Studying the spatial and seasonal variability of greenhouse gases across West Siberia: large-scale mobile measurement campaigns of 2018-2019Mikhail Arshinov, Boris Belan, Denis Davydov, Artem Kozlov, Alexander Fofonov, and Victoria Arshinova
The continuous ground-based measurements of greenhouse gases carried out in Siberia in the past two decades have allowed the long-term trends, as well as the diurnal and seasonal cycles, of CO2 and CH4 to be derived for this poorly studied region (Belikov et al., 2019). To date, these in-situ observations are made at the joint Japan-Russia Siberian Tall Tower Inland Observation Network (JR-STATION), consisting of six automated stations that must be serviced several times per year. In late October to early November 2018, we undertook the first mobile campaign to derive the distribution of CO2 and CH4 concentrations at high spatial resolution while traveling to the sites of the above network. For that, we used a commercially available GHG CRDS analyzer (G4301, Picarro Inc., Santa Clara, CA, USA) installed in an off-road vehicle (Arshinov et al., 2019). Over one trip, the instrument was driven over 7000 km throughout the study area.
In March, June, August, and October 2019 we performed four more campaigns along the same route. This enabled the seasonal pattern of CO2 and CH4 concentrations to be obtained over a huge area of West Siberia between 54.5° and 63.2° N and between 62.3° and 85.0° E, and revealed large- and small-scale spatial heterogeneity in CH4 mixing ratios, particularly over wetland regions. We plan to continue the mobile campaigns to cover interannual variations.
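A simple way to expose along-route heterogeneity in such drive data is to reduce the continuous analyzer trace to distance-binned medians. A minimal sketch with hypothetical values (not the campaign data); a wetland-like CH4 enhancement appears in the middle bin:

```python
import numpy as np

# Sketch: reducing a mobile-analyzer trace to along-route median mixing ratios,
# a basic way to map spatial heterogeneity in CH4 from drive data.

distance_km = np.array([0.5, 1.2, 1.8, 10.3, 11.1, 12.0, 20.2, 20.9])
ch4_ppm = np.array([1.95, 1.97, 1.96, 2.35, 2.60, 2.41, 1.98, 1.99])

bin_km = 10.0
bins = (distance_km // bin_km).astype(int)   # 10-km segments along the route
for b in np.unique(bins):
    med = np.median(ch4_ppm[bins == b])
    print(f"{b * bin_km:.0f}-{(b + 1) * bin_km:.0f} km: median CH4 = {med:.3f} ppm")
```

Medians rather than means keep short local plumes (e.g. a passing truck) from dominating a segment, while persistent wetland enhancements survive the reduction.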
This work was supported by the Ministry of Science and Higher Education of the Russian Federation under State Contract No. 14.616.21.0104 (ID No RFMEFI61618X0104).
Belikov, D.; Arshinov, M.; Belan, B.; Davydov, D.; Fofonov, A.; Sasakawa, M.; Machida, T. Analysis of the Diurnal, Weekly, and Seasonal Cycles and Annual Trends in Atmospheric CO2 and CH4 at Tower Network in Siberia from 2005 to 2016. Atmosphere 2019, 10, 689.
Arshinov, M. Yu.; Belan, B. D.; Davydov, D. K.; Kozlov, A. V.; Fofonov, A. V.; Arshinova, V. Heterogeneity of the spatial distribution of CO2 and CH4 concentrations in the atmospheric surface layer over West Siberia: October–November 2018. Proc. SPIE 11208, 25th International Symposium on Atmospheric and Ocean Optics: Atmospheric Physics, 1120831 (18 December 2019); https://doi.org/10.1117/12.2539205
How to cite: Arshinov, M., Belan, B., Davydov, D., Kozlov, A., Fofonov, A., and Arshinova, V.: Studying the spatial and seasonal variability of greenhouse gases across West Siberia: large-scale mobile measurement campaigns of 2018-2019, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3218, https://doi.org/10.5194/egusphere-egu2020-3218, 2020.
EGU2020-10235 | Displays | ITS2.15/BG2.25
Medical geographic modeling of spatiotemporal changes of naturally determined diseases under the changing climate and economic development of the Russian ArcticSvetlana Malkhazova, Dmitry Orlov, and Irina Bashmakova
This research aims to solve environmental problems related to the sustainable and economically efficient development of the North, which could enhance the quality of life and health of the population in the changing Russian Arctic. The medical geographic modeling of spatiotemporal patterns of naturally determined diseases is based on a detailed database covering the Arctic zone of Russia. The roles of the factors affecting the spread of diseases are unequal, with the climatic factor regarded as the most significant at all levels of territorial differentiation. At the highest (national) level, this factor determines the latitudinal zoning, which, in turn, determines the living conditions of disease hosts and vectors and, ultimately, the foci of diseases. At the regional level, the effect of climate is traced in monthly mean temperatures, temperature extremes, precipitation, snow depth, length of the frost-free period, etc. Changes in these characteristics influence the poikilothermic (cold-blooded) arthropods, as well as the pathogens spending a part of their life cycles in the arthropods' organisms. Another important factor is related to water resources, particularly water-table height and the ecological state of water bodies. Comparative analysis of hydrological and hydrochemical data, and of their total impact on morbidity rates in terms of pathogenicity eco-indices, can serve as an additional tool for detecting critical infection areas and for early warning of the population. The original methodology is applied to evaluate the actual medical environmental situation, to forecast possible spatiotemporal changes in morbidity, including those due to the most virulent infections, and to elaborate recommendations to public health authorities on planning preventive and health-improving activities in the Arctic.
How to cite: Malkhazova, S., Orlov, D., and Bashmakova, I.: Medical geographic modeling of spatiotemporal changes of naturally determined diseases under the changing climate and economic development of the Russian Arctic, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10235, https://doi.org/10.5194/egusphere-egu2020-10235, 2020.
EGU2020-20567 | Displays | ITS2.15/BG2.25
Space Observatory for carbon budget monitoring in Russian forests using Earth observations and modelling
Sergey Bartalev
Russian forests are of global importance for the implementation of international climate conventions, considering their potential for absorbing and accumulating atmospheric carbon at an impressive scale. In view of the recently adopted Paris Agreement on climate, comprehensive and accurate estimation of the carbon budget of Russian forests has become a top-priority research and development issue on the national agenda. However, existing quantitative estimates of the Russian forests’ carbon budget carry a significant level of uncertainty. One of the most obvious reasons for this uncertainty is the lack of sufficiently reliable and up-to-date information on the characteristics of forests and their dynamics.
The Russian Science Foundation has supported an ambitious research megaproject titled “Space Observatory for Forest Carbon” (SOFC), launched in 2019 and aimed at developing a new concept and comprehensive methods for forest carbon budget monitoring using Earth observation data and models of forest growth and dynamics. The main SOFC project objectives are as follows:
- Development of a new concept and methodology for Russian forests and their carbon budget monitoring based on the integration of remote sensing and ground data along with improved models of forest structure and dynamics;
- Development of new annually updated GIS databases on the characteristics and multi-annual dynamics of Russian forests;
- Development of an informational system and technology for the continuous monitoring of Russian forests’ carbon budget.
Information necessary for carbon budget estimation includes data on various land cover types, forest characteristics (growing stock volume, species composition, age, site index) and ecological parameters (net primary production, heterotrophic respiration). Data on natural (fires, diseases and pests, windstorms, droughts) and anthropogenic (felling, pollution) forest disturbances causing deforestation, as well as information on subsequent reforestation processes, are also vital.
Existing remote sensing methods can provide a significant part of the missing country-wide information on land cover types and forest characteristics for national-scale carbon budget estimation and monitoring. Multi-year time series of these data since the beginning of the century allow modelling of forest dynamics and biophysical characteristics. Earth-observation-derived information on the impact of forest fires includes burnt area mapping over various land cover types as well as fire severity assessment, allowing characterisation of fire-induced carbon emissions. Furthermore, the developed methods for processing and analysing multi-year satellite data time series enable detection of forest cover changes caused by various destructive factors, making it possible to substantially improve the accuracy of carbon budget estimation.
The obtained information on forest ecosystem parameters is used to improve existing and develop new approaches to forest carbon budget estimation, as well as to simulate various scenarios of Russian economic development depending on forest management practices and climate change trajectories.
This work was supported by the Russian Science Foundation [grant number 19-77-30015].
How to cite: Bartalev, S.: Space Observatory for carbon budget monitoring in Russian forests using Earth observations and modelling, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20567, https://doi.org/10.5194/egusphere-egu2020-20567, 2020.
EGU2020-18506 | Displays | ITS2.15/BG2.25
Satellite-Based Analysis of Fire Events and Transport of Emissions in the Arctic
Anu-Maija Sundström, Tomi Karppinen, Antti Arola, Larisa Sogacheva, Hannakaisa Lindqvist, Gerrit de Leeuw, and Johanna Tamminen
Climate change is proceeding fastest in the Arctic region. During past years, Arctic summers have been warmer and drier, elevating the risk of extensive forest fire episodes. In fact, satellite observations show that during the past two summers (2018, 2019) an increase is seen in the number of fires occurring above the Arctic Circle, especially in Siberia. While human-induced emissions of long-lived greenhouse gases are the main driver of global warming, short-lived climate forcers and pollutants emitted from forest fires also play an important role, especially in the Arctic. Absorbing aerosols can cause direct Arctic warming locally. They can also alter the radiative balance when depositing onto snow and ice and decreasing the surface albedo, resulting in subsequent warming. Aerosol-cloud interaction feedbacks can also enhance warming. Forest fire emissions further affect local air quality and photochemical processes in the atmosphere. For example, CO contributes to the formation of tropospheric ozone and affects the abundance of greenhouse gases such as methane and CO2.
This study focuses on analyzing fire episodes in the Arctic over the past 10 years, as well as investigating the transport of forest fire CO and smoke aerosols into the Arctic. Smoke plumes and their transport are analyzed using the Absorbing Aerosol Index (AAI) from several satellite instruments: GOME-2 onboard Metop-A and -B, OMI onboard Aura, and TROPOMI onboard the Copernicus Sentinel-5P satellite. Observations of CO are obtained from IASI (Metop-A and -B) as well as from TROPOMI, while fire observations are obtained from the MODIS instruments onboard Aqua and Terra, as well as from VIIRS onboard Suomi NPP. In addition, observations from a space-borne lidar, CALIPSO, are used to obtain the vertical distribution of smoke and to estimate plume heights.
How to cite: Sundström, A.-M., Karppinen, T., Arola, A., Sogacheva, L., Lindqvist, H., de Leeuw, G., and Tamminen, J.: Satellite-Based Analysis of Fire Events and Transport of Emissions in the Arctic, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18506, https://doi.org/10.5194/egusphere-egu2020-18506, 2020.
EGU2020-13853 | Displays | ITS2.15/BG2.25
Precipitation isotope (δ¹⁸O, δ²H, d-excess) seasonality across the Pan-Arctic during MOSAiC
Moein Mellat, Hannah Bailey, Kaisa-Riikka Mustonen, Hannu Marttila, Pete D. Akers, Eric S. Klein, and Jeffrey M. Welker and the PAPIN
Stable isotopes of oxygen and hydrogen in precipitation (δ18OP, δ2HP, d-excess) are valuable hydrological tracers linked to ocean-atmosphere processes such as moisture source, storm trajectory, and seasonal temperature cycles. However, the characteristics of δ18OP, δ2HP and d-excess and the processes governing them are yet to be quantified across the Arctic due to a lack of long-term empirical data. The Pan-Arctic Precipitation Isotopes Network (PAPIN) is a new coordinated network of 24 stations aimed at the direct sampling, analysis, and synthesis of precipitation isotope geochemistry in the north. Our ongoing event-based sampling provides a rich spatial dataset during the Multidisciplinary drifting Observatory for the Study of Arctic Climate (“MOSAiC”) expedition and new insight into coupled climate processes operating in the Arctic today. To date, precipitation δ18O and δ2H data (2018-2019) exhibit pronounced spatial and seasonal variability that broadly conforms to theoretical and observed understanding: (1) decreasing δ18OP/δ2HP with increasing latitude and elevation, (2) decreasing δ18OP/δ2HP with increasing continentality, and (3) increasing δ18OP/δ2HP with increasing surface air temperature (SAT). However, event-based sampling reveals remarkable variability among these relationships. For example, our observed Arctic mean summer δ18O-latitude slope of -0.3‰/degree of latitude is 50% smaller than the annual latitude effect in the mid-latitudes (-0.6‰/degree). This rate decreases to -0.1‰/degree of latitude in Finland and Russia, while in Alaska and northern Canada a -0.7‰/degree latitudinal rate is observed. Similarly, we observe marked spatial differences in mean δ18O-temperature coefficients. Using back-trajectory analysis, we attribute these nuances to divergent moisture sources and transport pathways into, within, and out of the Arctic, and demonstrate how atmospheric circulation processes drive changes in isotope geochemistry and climate that are linked to sea ice concentration.
For example, Alaskan moisture derived from the North Pacific Ocean, the Sea of Okhotsk, and the Bering Sea remains relatively enriched in 18O/2H due to higher sea surface temperatures, whereas moisture originating from ice-covered seas to the north is characterized by relatively depleted values. This is the first coordinated network to quantify the spatial patterns of isotopes in precipitation simultaneously across the entire Arctic. In combination with a Pan-Arctic network of continuous water vapor isotope analyzers, our process-level studies will resolve the patterns and processes governing the δ18O, δ2H and d-excess values of the Arctic water cycle during the MOSAiC expedition and beyond.
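The "latitude effect" slopes quoted above are, in essence, regression coefficients of δ18O on station latitude. The sketch below shows how such a slope can be estimated by ordinary least squares; the station values are hypothetical and only constructed to reproduce roughly the -0.3‰/degree Arctic summer slope reported here, not actual PAPIN data.

```python
# Toy illustration of estimating a "latitude effect" slope for precipitation
# delta-18O. All station values below are hypothetical, not PAPIN data.

def lat_slope(latitudes, d18o):
    """Ordinary least-squares slope of delta-18O (permil) per degree latitude."""
    n = len(latitudes)
    mean_lat = sum(latitudes) / n
    mean_iso = sum(d18o) / n
    cov = sum((x - mean_lat) * (y - mean_iso) for x, y in zip(latitudes, d18o))
    var = sum((x - mean_lat) ** 2 for x in latitudes)
    return cov / var

# Hypothetical summer means at four stations between 60N and 75N.
lats = [60.0, 65.0, 70.0, 75.0]
iso = [-10.0, -11.6, -13.0, -14.5]
print(round(lat_slope(lats, iso), 2))  # -0.3 permil per degree of latitude
```

In practice the network's event-based samples would first be aggregated to seasonal station means before fitting, and separate fits per sub-region (e.g. Fennoscandia vs. Alaska) yield the regionally divergent slopes discussed above.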
How to cite: Mellat, M., Bailey, H., Mustonen, K.-R., Marttila, H., Akers, P. D., Klein, E. S., and Welker, J. M. and the PAPIN: Precipitation isotope (δ¹⁸O, δ²H, d-excess) seasonality across the Pan-Arctic during MOSAiC, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13853, https://doi.org/10.5194/egusphere-egu2020-13853, 2020.
Stable isotopes of oxygen and hydrogen in precipitation (δ18OP, δ2HP, d-excess) are valuable hydrological tracers linked to ocean-atmospheric processes such as moisture source, storm trajectory, and seasonal temperature cycles. However, characteristics of δ18OP, δ2HP and d-excess and the processes governing them are yet to be quantified across the Arctic due to a lack of long-term empirical data. The Pan-Arctic Precipitation Isotopes Network (PAPIN) is a new coordinated network of 24 stations aimed at the direct sampling, analysis, and synthesis of precipitation isotope geochemistry in the north. Our ongoing event-based sampling provides a rich spatial dataset during the Multidisciplinary drifting Observatory for the Study of Arctic Climate (“MOSAiC”) expedition and new insight into coupled climate processes operating in the Arctic today. To date, precipitation δ18O and δ2H data (2018-2019) exhibit pronounced spatial and seasonal variability that broadly conforms to theoretical and observed understanding: (1) decreasing δ18OP/ δ2HP with increasing latitude and elevation, (2) decreasing δ18OP/ δ2HP with increasing continentality, and (3) increasing δ18OP/ δ2HP with increasing SAT. However, event-based sampling reveals remarkable variability among these relationships. For example, our observed Arctic mean summer -latitude slope of -0.3‰/degree of latitude is 50% smaller than the annual latitude effect in the mid-latitudes (-0.6‰/degree). This rate decreases to -0.1‰/degree of latitude in Finland and Russia, while in Alaska and northern Canadian a -0.7‰/degree latitudinal rate is observed. Similarly, we observe marked spatial differences in mean δ18O-temperature coefficients. Using back-trajectory analysis, we attribute these nuances to divergent moisture sources and transport pathways into, within, and out of the Arctic, and demonstrate how atmospheric circulation processes drive changes in isotope geochemistry and climate that are linked to sea ice concentration. 
For example, Alaska moisture derived from the North Pacific Ocean, Sea of Okhotsk, and the Bering Sea remains relatively enriched in 18OP/2H due to higher sea surface temperatures, whereas moisture originating from ice-covered seas to the north is characterized by relatively depleted values. This is the first coordinated network to quantify the spatial patterns of isotopes in precipitation, simultaneously, across the entire Arctic. In combination with a Pan-Arctic network of continuous water vapor isotope analyzers, our process-level studies will resolve the patterns and processes governing the δ18O, δ2H and d-excess values of the Arctic water cycle during the MOSAiC expedition and beyond.
How to cite: Mellat, M., Bailey, H., Mustonen, K.-R., Marttila, H., Akers, P. D., Klein, E. S., and Welker, J. M. and the PAPIN: Precipitation isotope (δ¹⁸O, δ²H, d-excess) seasonality across the Pan-Arctic during MOSAiC, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13853, https://doi.org/10.5194/egusphere-egu2020-13853, 2020.
EGU2020-13095 | Displays | ITS2.15/BG2.25
Estimation of possible impact of black carbon emissions from 2019 large Siberian forest wildfires on the Arctic region
Veronika Ginzburg, Sergey Kostrikin, Vladimir Korotkov, Anastasia Revokatovа, Polina Polumieva, Alexey Chernenkov, and Maria Zelenova
The presented study aims to estimate the probability of black carbon transport to the Arctic region from the large forest wildfires that occurred in the Russian boreal taiga in summer 2019, and to estimate its deposition on the ice surface and its contribution to shortwave radiative forcing.
Extreme forest fires were observed in 2019 over the territories of the Krasnoyarsk region and the Republic of Yakutia. The Russian Informational Remote Monitoring System of the Federal Forestry Agency provides data on the areas of forest lands damaged by different types of fires. These data were used to select the ten most intense and ten most persistent fires for each region. The fuel mass available for combustion, including biomass, litter and deadwood, was estimated using the growing stock data of the State Forestry Register, differentiated for the regions of the Russian Federation, applying country-specific conversion coefficients [Schepaschenko et al., 2018]. Black carbon emissions from forest fires were estimated using the methodology and combustion coefficients of the 2006 IPCC Guidelines for National Greenhouse Gas Inventories and the black carbon emission factor from Akagi et al. [2011].
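The IPCC (2006, Vol. 4) fire emission methodology referenced here reduces to a simple product: emission = burnt area × fuel mass per hectare × combustion factor × species emission factor. A minimal sketch, with all numeric inputs purely illustrative rather than the study's actual values:

```python
# Sketch of the IPCC 2006 (Vol. 4) fire emission equation applied to black
# carbon: L = A * M_B * C_f * G_ef * 1e-3, where A is burnt area (ha),
# M_B is fuel mass available for combustion (t dry matter/ha), C_f is the
# dimensionless combustion factor, and G_ef is the emission factor
# (g per kg of dry matter burnt); L comes out in tonnes of the species.

def fire_emission_t(area_ha, fuel_t_per_ha, combustion_factor, ef_g_per_kg):
    """Emission in tonnes following the IPCC 2006 fire equation."""
    return area_ha * fuel_t_per_ha * combustion_factor * ef_g_per_kg * 1e-3

# Hypothetical example: 100,000 ha burnt, 50 t/ha of fuel, combustion
# factor 0.4, and an assumed black-carbon emission factor of 0.5 g/kg.
bc_tonnes = fire_emission_t(100_000, 50.0, 0.4, 0.5)
print(bc_tonnes)  # 1000.0 t of BC under these assumed inputs
```

The study's actual fuel masses come from the State Forestry Register with the Schepaschenko et al. [2018] conversion coefficients, and its BC emission factor from Akagi et al. [2011]; the values above merely demonstrate the arithmetic.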
The main factor determining the transport of particles is the synoptic situation. Blocking anticyclones and cyclonic series affect the circulation regime and the conditions for particle transport to the Arctic. For these regions, the climatic frequency of occurrence of southern and south-western winds in summer is about 30-40%. The probability of atmospheric trajectory transfer from each selected fire event to the Arctic region was estimated with the HYSPLIT trajectory model; real synoptic data for each selected fire event were also used to analyze the probability of emission cloud transfer to northern latitudes.
The black carbon effect, including concentrations in the atmosphere, deposition on the ice surface, modification of surface albedo in the ice-covered region of the Arctic, and the additional radiative forcing associated with BC emissions from forest fires, was estimated using the climate model INMCM5 [Volodin et al., 2017]. Aerosol sources, advection, gravitational sedimentation, surface absorption, and scavenging by precipitation are taken into account to compute aerosol concentration variations. The radiative forcing caused by BC emissions from forest fires was calculated using the SNICAR model.
The study is supported by RFBR project No.18-05-60183.
REFERENCES
Volodin, E. M., Mortikov, E. V., Kostrykin, S. V., Galin, V. Ya., Lykossov, V. N., Gritsun, A. S., Diansky, N. A., Gusev, A. V., and Yakovlev, N. G.: Simulation of the present-day climate with the climate model INMCM5, Climate Dynamics, 2017, doi:10.1007/s00382-017-3539-7.
IPCC: 2006 IPCC Guidelines for National Greenhouse Gas Inventories, Vol. 4: Agriculture, Forestry and Other Land Use, IPCC, 2006.
Akagi, S. K., Yokelson, R. J., Wiedinmyer, C., Alvarado, M. J., Reid, J. S., Karl, T., Crounse, J. D., and Wennberg, P. O.: Emission factors for open and domestic biomass burning for use in atmospheric models, Atmos. Chem. Phys., 11(9), 4039-4072, 2011.
Schepaschenko, D., Moltchanova, E., Shvidenko, A., Blyshchyk, V., Dmitriev, E., Martynenko, O., See, L., and Kraxner, F.: Improved Estimates of Biomass Expansion Factors for Russian Forests, Forests, 9, 312, 1-23, 2018, https://doi.org/10.3390/f9060312.
How to cite: Ginzburg, V., Kostrikin, S., Korotkov, V., Revokatovа, A., Polumieva, P., Chernenkov, A., and Zelenova, M.: Estimation of possible impact of black carbon emissions from 2019 large Siberian forest wildfires on the Arctic region, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13095, https://doi.org/10.5194/egusphere-egu2020-13095, 2020.
EGU2020-10833 | Displays | ITS2.15/BG2.25
Coupling bottom-up process modeling to atmospheric inversions to constrain the Siberian methane budget
Mathias Goeckede, Philipp de Vrese, Victor Brovkin, Frank-Thomas Koch, and Christian Roedenbeck
Methane (CH4) is one of the most important greenhouse gases, but unexpected changes in atmospheric CH4 budgets over the past decades emphasize that many aspects regarding the role of this gas in the global climate system remain unexplained to date. With emissions and concentrations likely to continue increasing in the future, quantitative and qualitative insights into processes governing CH4 sources and sinks need to be improved in order to better predict feedbacks with a changing climate. Particularly the high northern latitudes have been identified as a potential future hotspot for global CH4 emissions, but the effective impact of rapid climate change on the mobilization of the enormous carbon reservoir currently stored in northern soils remains unclear.
Process-based modelling frameworks are the most promising tool for predicting CH4 emission trajectories under future climate scenarios. To improve insights into CH4 emissions and their controls, the land-surface component of the Max Planck Earth System Model, JSBACH, has been upgraded in recent years. In this context, a particular focus has been placed on refining important processes in permafrost landscapes, including freeze-thaw processes, high-resolution vertical gradients in the transport and transformation of carbon in soils, and a dynamic coupling between carbon, water and energy cycles. Evaluating the performance of this model, however, remains a challenge because of the limited observational database for high northern latitude regions.
In the presented study, we couple methane flux fields simulated by JSBACH to an atmospheric inversion scheme to evaluate model performance within the Siberian domain. Optimizing the surface-atmosphere exchange against a database of atmospheric methane mixing ratios will allow us to assess the large-scale representativeness of the JSBACH simulations, including their spatio-temporal variability in the chosen domain. We will test the impact of selected model parameter settings on the agreement between bottom-up and top-down techniques, thereby highlighting how sensitive regional-scale methane budgets are to the dominant processes and controls within this region.
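At its simplest, the top-down optimization described above amounts to adjusting bottom-up fluxes so that, after atmospheric transport, they best match observed mixing-ratio enhancements. The toy sketch below reduces this to a single least-squares scaling factor; a real inversion solves for many regions and time steps with full error covariances, and all numbers here are invented for illustration.

```python
# Toy sketch of a top-down flux optimization: find the single scaling
# factor lambda that minimizes sum (obs - lambda * mod)^2, where "mod"
# is the mixing-ratio signal obtained by transporting prior bottom-up
# (JSBACH-like) fluxes. Values are hypothetical, not study data.

def optimal_scaling(obs_enhancement, modeled_enhancement):
    """Closed-form least-squares scaling factor for a one-parameter inversion."""
    num = sum(o * m for o, m in zip(obs_enhancement, modeled_enhancement))
    den = sum(m * m for m in modeled_enhancement)
    return num / den

# Hypothetical CH4 enhancements (ppb) at a tower: observed vs. modeled
# from transported prior fluxes.
obs = [30.0, 45.0, 24.0, 60.0]
mod = [20.0, 30.0, 16.0, 40.0]
lam = optimal_scaling(obs, mod)
print(lam)  # 1.5, i.e. prior fluxes would need to be scaled up by 50%
```

The sensitivity tests mentioned in the abstract correspond, in this picture, to re-running the optimization with different model parameter settings and comparing how far the resulting scaling factors depart from one.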
How to cite: Goeckede, M., de Vrese, P., Brovkin, V., Koch, F.-T., and Roedenbeck, C.: Coupling bottom-up process modeling to atmospheric inversions to constrain the Siberian methane budget, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10833, https://doi.org/10.5194/egusphere-egu2020-10833, 2020.
EGU2020-16584 | Displays | ITS2.15/BG2.25
Assessment of CH4 sources in the Arctic using regional atmospheric measurements and their link to surface emissionsSophie Wittig, Antoine Berchet, Jean-Daniel Paris, Mikhail Arshinov, Toshinobu Machida, Motoki Sasakawa, Doug Worthy, and Isabelle Pison
The Arctic is a critical area in terms of global warming. Not only are rising temperatures already changing the natural conditions of this region, but potentially increased regional methane (CH4) emissions are also likely to intensify global warming even further in the near term.
This future effect arises from the thawing and destabilization of inland and sub-sea permafrost, which enhances the release of methane into the atmosphere from extensive CH4 and organic carbon pools that have so far been shielded by ice and frozen soil. Moreover, the high-latitude regions already play a key role in the global CH4 budget because of large sources such as wetlands and freshwater lakes, in addition to human activities, predominantly the fossil fuel industry of the Arctic nations.
However, scientific understanding of the actual contribution of Arctic methane emissions to the global CH4 budget is still relatively immature. Besides the difficulties of carrying out measurements in such remote areas, this is due to the highly inhomogeneous spatial distribution of methane sources and sinks, as well as to ongoing changes in hydrology, vegetation and carbon decomposition.
The aim of this work is therefore to reduce the uncertainties in methane sources and sinks in the Arctic region during the most recent years using an atmospheric approach, in order to improve the assessment of local and global impacts.
To do so, atmospheric CH4 concentrations measured at about 30 stations located in different Arctic nations have been analysed with regard to their trends, seasonal fluctuations and spatial patterns, as well as their link to regional emissions.
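The trend and seasonal-cycle analysis of such station records can be sketched in a first-order way. This is an illustrative example on synthetic data, not the authors' processing chain; the trend value, seasonal amplitude and noise level are invented for the demonstration.

```python
import numpy as np

# Sketch: decompose a 10-year monthly CH4 record into a linear trend and a
# mean seasonal cycle (synthetic data; units ppb are illustrative).
rng = np.random.default_rng(1)
t = np.arange(120) / 12.0                                  # time in years, monthly steps
ch4 = 1900.0 + 8.0 * t + 15.0 * np.cos(2 * np.pi * t) \
      + rng.normal(0.0, 2.0, t.size)

# 1) linear trend (ppb per year) via ordinary least squares
slope, intercept = np.polyfit(t, ch4, 1)

# 2) mean seasonal cycle of the detrended series, by calendar month
detrended = ch4 - (slope * t + intercept)
seasonal = detrended.reshape(10, 12).mean(axis=0)          # 12 monthly means
amp = (seasonal.max() - seasonal.min()) / 2.0

print(f"trend: {slope:.2f} ppb/yr, seasonal amplitude: {amp:.1f} ppb")
```

Real records additionally require gap handling, data selection for background conditions, and smoother curve-fitting methods, but the decomposition idea is the same.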
How to cite: Wittig, S., Berchet, A., Paris, J.-D., Arshinov, M., Machida, T., Sasakawa, M., Worthy, D., and Pison, I.: Assessment of CH4 sources in the Arctic using regional atmospheric measurements and their link to surface emissions, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-16584, https://doi.org/10.5194/egusphere-egu2020-16584, 2020.
EGU2020-7740 * | Displays | ITS2.15/BG2.25 | Highlight
Pan-Eurasian Experiment (PEEX) Programme – Overview on the recent resultsHanna Lappalainen, Veli-Matti Kerminen, Nuria Altimir, Alexander Mahura, Ekataterina Ezhova, Timo Vihma, Petteri Uuotila, Sergey Chalov, Pavel Konstantinov, Michael Archinov, Yubao Qui, Igor Ezau, Ilmo Kukkonen, Vladimir Melnikov, Aijun Ding, Alexander Baklanov, Nikolai Kasimov, Hudong Guo, Varely Bondur, and Tuukka Petäjä and the Hanna Lappalainen
The Pan-Eurasian Experiment (PEEX) Programme (www.atm.helsinki.fi/peex), initiated in 2012, is an asset for INAR at the University of Helsinki and its co-partners for gaining high international visibility, attracting further research collaboration and upscaling the scientific impact in various arenas. PEEX is interested in the northern high latitudes (Arctic, boreal) and in China and the new Silk Road Economic Belt regions. The PEEX scientific focus is on understanding large-scale feedbacks and interactions in the land-atmosphere-ocean continuum under the changing climate of the northern high latitudes and the Arctic (Kulmala et al. 2015, Lappalainen et al. 2014; 2015; 2016; 2018, Vihma et al. 2019, Alekseychik et al. 2019, Kasimov et al. 2018), and on the transport and transformation of air pollution in China. PEEX research results have been published in the PEEX Special Issue of Atmospheric Chemistry and Physics (www.atmos-chem-phys.net/special_issue395.html), in the journal “Geography, Environment, Sustainability” (ges.rgo.ru/jour) and in the Journal of Big Data (journalofbigdata.springeropen.com). In 2019 PEEX started a comprehensive analysis of its first results over the last five years, based on published peer-reviewed papers and results attained from the PEEX geographical domain. The aim of the analysis is to compare the state-of-the-art research outcome with the large-scale research questions addressed by the PEEX Science Plan (Lappalainen et al. 2015). To facilitate direct input from the research community, we have asked researchers to fill in a form listing their main scientific results and activities considered relevant to the PEEX region, together with ancillary information such as the type of activity or geographical extent. The preliminary metadata database covers information from over 400 scientific papers, and the analysis is in progress.
The key gaps in current understanding and future research needs will be discussed from a system point of view, covering the land ecosystem, atmosphere, ocean and river system, and society perspectives, and preliminary results will be presented in the EGU PEEX session.
How to cite: Lappalainen, H., Kerminen, V.-M., Altimir, N., Mahura, A., Ezhova, E., Vihma, T., Uuotila, P., Chalov, S., Konstantinov, P., Archinov, M., Qui, Y., Ezau, I., Kukkonen, I., Melnikov, V., Ding, A., Baklanov, A., Kasimov, N., Guo, H., Bondur, V., and Petäjä, T. and the Hanna Lappalainen: Pan-Eurasian Experiment (PEEX) Programme – Overview on the recent results , EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7740, https://doi.org/10.5194/egusphere-egu2020-7740, 2020.
EGU2020-5359 | Displays | ITS2.15/BG2.25
Big Earth Data enhance the Implementation of PEEX along the Belt and Road regionsYubao Qiu, Huadong Guo, Jie Liu, Fang Chen, and Massimo Menenti
The Digital Belt and Road program (DBAR), initiated in 2016, aims to advance the scientific understanding of Earth system changes and the sustainable development goals along the Belt and Road regions (B&R); after its startup phase, it has now entered the first phase of its implementation plan. Building on strong collaboration and common interests, the Pan-Eurasian Experiment (PEEX) and DBAR join together in using Earth observations to understand and address the challenges of environmental change, especially, for the Belt and Road in Asia, changes in snow and ice, vegetation and ecosystems, disasters, urban areas, agriculture and water stress.
With the development of Earth observations, from both ground-based networks and space- and airborne platforms, the Big Earth Data approach has been developed to address the societal and scientific challenges of the common PEEX and DBAR domain, drawing on the efforts of its eight working groups and their potential contributions to the work of PEEX. In this talk, we will describe Big Earth Data, the societal challenges and the platform development, with particular focus on snow and ice, urban areas, the environment, disasters and water as the priorities for cross-fertilization with PEEX.
How to cite: Qiu, Y., Guo, H., Liu, J., Chen, F., and Menenti, M.: Big Earth Data enhance the Implementation of PEEX along the Belt and Road regions, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5359, https://doi.org/10.5194/egusphere-egu2020-5359, 2020.
EGU2020-13662 | Displays | ITS2.15/BG2.25
Investigation of the different heat waves indices applicability for the territory of UkraineIvan Kostyrko, Sergiy Snizhko, Olga Shevchenko, Rostislav Oliynyk, Hanna Svintsitska, and Alexander Mahura
Climate extremes are of major concern in the global context, since they can result in significant financial and human losses. The scale of heat wave (HW) impacts highlights the need to be able to measure extreme events in an informative manner that is suitable for the geographical region and the climatic fields. Recent climate projections show increases in the frequency, magnitude and duration of temperature extremes. This makes it very important to determine appropriate metrics for heat and cold waves, in particular under climate change conditions. To date, however, there is no universally accepted heat wave definition. Because of the range of social groups and economic sectors affected by heat waves, it is of course impossible to obtain a single index that is appropriate for each group and can be calculated from readily available climatological data.
This research therefore aims to calculate several absolute and relative heat wave indices for the reference period 1981-2010 and to compare the spatial-temporal distributions of the HW indices over the territory of Ukraine, following the recommendations of the WMO 2015 Technical Regulations.
The selected methodology for the heat wave investigation at this stage is based on the absolute indices proposed by Fischer and Schär (2010) and Perkins and Alexander (2012): HWM (average magnitude of all summertime heat waves), HWA (hottest day of the hottest summertime event), HWN (yearly number of heat waves during summertime), HWD (length of the longest summertime event) and HWF (sum of participating heat wave days in the summertime season, which meet the HW definition criteria over a 30-day interval). In addition, the relative indices Heat Wave Magnitude Index (HWMI; Russo et al. 2014) and Heat Wave Magnitude Index daily (HWMId; Russo et al. 2015) were used.
In this research, the 5 absolute indices and 2 relative indices were thus calculated for 50 weather stations of the meteorological network of the Ukrainian Hydrometeorological Centre for the summer months of the reference period 1981-2010.
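The absolute indices above can be illustrated with a short computation. This is a simplified sketch, not the authors' code: it uses one summer of synthetic daily maximum temperatures, a fixed (rather than calendar-day) 90th-percentile threshold, and a minimum heat-wave duration of three days after Perkins and Alexander (2012); the flat background series is chosen only to keep the example transparent.

```python
import numpy as np

# Synthetic JJA daily Tmax (deg C): flat background for clarity, plus two warm spells
tmax = np.full(92, 24.0)
tmax[10:12] = 30.0                        # 2-day spike: too short to count as a heat wave
tmax[40:46] = [33, 34, 35, 34, 33, 32]    # 6-day heat wave

threshold = np.percentile(tmax, 90)       # simplified fixed 90th-percentile threshold
hot = tmax > threshold

# run-length encode consecutive hot days
edges = np.diff(np.concatenate(([0], hot.astype(int), [0])))
starts, ends = np.where(edges == 1)[0], np.where(edges == -1)[0]
runs = ends - starts
is_wave = runs >= 3                       # minimum-duration criterion

HWN = int(is_wave.sum())                                   # yearly number of heat waves
HWF = int(runs[is_wave].sum())                             # participating heat-wave days
HWD = int(runs[is_wave].max()) if HWN else 0               # length of longest event
HWA = max(tmax[s:e].max()                                  # hottest day of hottest event
          for s, e, w in zip(starts, ends, is_wave) if w)

print(HWN, HWF, HWD, HWA)
```

Here the 2-day spike is correctly excluded, giving HWN = 1, HWF = 6, HWD = 6 and HWA = 35.0; a station analysis would repeat this per year with a calendar-day climatological threshold.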
It was found that for almost all of the territory of Ukraine, the anomalies of all absolute heat wave indices in 2010 (compared to the reference period 1981-2010) were clearly noticeable. However, the analysis of the 2010 heat wave showed that a certain multicollinearity is inherent in the calculated absolute indices. The results of the statistical estimation showed that using all five heat wave indices is not necessary: in our opinion, HWN, HWF and HWM alone are sufficient to characterize HWs.
The calculated relative heat wave indices are sufficiently sensitive to minor changes in the daily maximum air temperature, with HWMId found to be the most sensitive among the studied indices. In our opinion, the HWN, HWF, HWM and HWMId indices are therefore the most applicable for the investigation of heat waves over the territory of Ukraine.
How to cite: Kostyrko, I., Snizhko, S., Shevchenko, O., Oliynyk, R., Svintsitska, H., and Mahura, A.: Investigation of the different heat waves indices applicability for the territory of Ukraine, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13662, https://doi.org/10.5194/egusphere-egu2020-13662, 2020.
EGU2020-13639 | Displays | ITS2.15/BG2.25
Large-scale quantifying of sources and sinks of atmospheric carbon in Central Siberia: from middle taiga to Arctic tundraAlexey Panov, Anatoly Prokushkin, Anastasiya Urban, Vyacheslav Zyrianov, Mikhail Korets, Nikita Sidenko, Jošt Lavrič, and Martin Heimann
The boreal and arctic zone of Siberia represents a «hot spot» in the global Earth climate system, containing large and potentially vulnerable carbon stocks as well as considerable exchange fluxes of carbon dioxide (CO2) and methane (CH4) with the atmosphere. Until recently, the Siberian region was only sparsely covered by carbon flux measurements. Only within the EU-funded projects «Eurosiberian Carbonflux» and «Terrestrial Carbon Observing System – Siberia» (TCOS-Siberia) were several atmospheric and terrestrial ecosystem stations operational in European Russia and Siberia between 1998 and 2005.
Since 2006, in order to monitor long-term biogeochemical changes, the Zotino Tall Tower Observatory (ZOTTO; www.zottoproject.org), a research platform for large-scale climatic observations, has been operational in Central Siberia (60°48' N, 89°21' E), about 20 km west of the Yenisei river. The observatory consists of a 304-m tall mast for continuous high-precision measurements of greenhouse gases, meteorology and a multitude of aerosol properties in the planetary boundary layer (PBL). Sampling the PBL is essential for the «top-down» observation strategy, since it minimizes local effects and permits capturing regional concentration signals. Such measurements are used in atmospheric inversion modelling to estimate sinks and sources at the surface over the large Siberian territory. In turn, the tall tower observations are linked with eddy covariance measurements of carbon exchange fluxes, introducing a «bottom-up» observational approach, over locally representative ecosystems: pine forest–bog complexes (60°48'N; 89°22'E), a mid-taiga dark coniferous forest (60°01'N; 89°49'E), a northern taiga mature larch forest (64°12'N; 100°27'E) and a forest-tundra ecotone (67°28'N; 86°29'E). This meridional observation network captures exchange fluxes of CO2 and CH4 in ecosystems of the main biogeochemical provinces of the Yenisei river basin (2580 thousand km²), which can be scaled up to the region using vegetation maps, forest biomass inventories and remote sensing information. Since 2018 the observation network has been expanded, and a new coastal station for continuous atmospheric measurements of greenhouse gases (СО2/СН4/Н2О) and meteorology is operational on the shore of the Arctic Ocean (73°33'N; 80°34'E) near the Dikson settlement. This coastal station enhances the atmospheric signal derived at ZOTTO regarding the budget of trace gases in Central Siberia, permits tracing ocean-continent transport of greenhouse gases, and extends the circum-Arctic observation network.
Here we summarize the scientific rationale of the observation network, infrastructure details of the stations, the local environments and provide some exemplary results obtained from measurements. The reported study was funded by the Max Planck Society (Germany), RSF project № 14-24-00113 and the RFBR projects № 18-05-00235 and 18-05-60203.
How to cite: Panov, A., Prokushkin, A., Urban, A., Zyrianov, V., Korets, M., Sidenko, N., Lavrič, J., and Heimann, M.: Large-scale quantifying of sources and sinks of atmospheric carbon in Central Siberia: from middle taiga to Arctic tundra , EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13639, https://doi.org/10.5194/egusphere-egu2020-13639, 2020.
EGU2020-11705 | Displays | ITS2.15/BG2.25
Anthropogenic Transformation of Russian Arctic: dividing the area into zones based on cluster analysisEkaterina Eremenko, Andrei Bredikhin, Sergei Kharchenko, Yury Belyaev, Ekaterina Matlakhova, Fedor Romanenko, Sergei Bolysov, and Yulia Fuzeina
In this study we analyzed information on the presence of different types of anthropogenic objects (settlements, transport infrastructure, mining areas, etc.) in the Arctic zone of Russia. This information was taken from open Internet sources: maps, cartographic projects, databases, and schemes of regional development of the Russian Federation. The data analysis shows that only about 20% of the Russian Arctic's area is affected by economic development, while on the other 80% of the area there are practically no anthropogenic objects.
The economic development of the Arctic region decreases from the west to the east of Russia. The Republic of Karelia is characterized by the highest level of economic development (only 13.1% of its area is not affected by any economic activity), while the lowest levels are found in Krasnoyarskiy krai (95.2%) and the Republic of Sakha (Yakutia) (87.2%). Data on the presence, position and types of anthropogenic objects were subjected to cluster analysis by the k-means method in order to identify characteristic combinations of objects corresponding to different types of development. Within the Arctic zone of Russia, six main types of economic use of the territory were identified, each characterized by the dominance of a certain type of anthropogenic object (settlements, roads, mining industry objects, oil and gas transport infrastructure, or wood industry objects).
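The k-means step can be sketched in miniature. This is a hypothetical illustration, not the authors' dataset or code: each territorial cell is represented by a binary presence vector of anthropogenic object types, and cells are grouped by nearest-centroid assignment; the feature set, cell values and k = 2 are invented for the example (the study itself identified six types).

```python
import numpy as np

# Columns (assumed for illustration): settlement, road, mine, pipeline, forestry site
X = np.array([
    [1, 1, 0, 0, 0], [1, 1, 0, 0, 1], [1, 1, 0, 0, 0],   # settlement/road cells
    [0, 0, 1, 1, 0], [0, 1, 1, 1, 0], [0, 0, 1, 1, 0],   # mining/pipeline cells
], dtype=float)

k = 2
centroids = X[[0, 3]].copy()              # deterministic initialization for the sketch
for _ in range(10):
    # assign each cell to the nearest centroid (Euclidean distance)
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    # update each centroid as the mean presence vector of its assigned cells
    centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])

print(labels)   # first three cells fall in one cluster, last three in the other
```

On real data one would run many random initializations and choose k by a criterion such as the within-cluster sum of squares, then interpret each centroid as a development type.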
Each type of economic use of the territory is characterized by a specific anthropogenic transformation of the topography of the area. The greatest transformation of topography and geomorphological processes was found within the open mining areas, while the least influence on the topography is associated with some of the linear transport structures (unpaved roads and underground gas pipelines). In general, economic activity in the Russian Arctic is relatively low: anthropogenic transformation of topography and geomorphic processes is typical for an area of about 667 thousand square km, i.e. about 18% of the total area of the Russian Arctic.
This study is supported by Russian Foundation for Basic Research (RFBR) Project № 18-05-60200 "Anthropogenic transformation of Arctic Landscapes for the last 100 years".
How to cite: Eremenko, E., Bredikhin, A., Kharchenko, S., Belyaev, Y., Matlakhova, E., Romanenko, F., Bolysov, S., and Fuzeina, Y.: Anthropogenic Transformation of Russian Arctic: dividing the area into zones based on cluster analysis, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11705, https://doi.org/10.5194/egusphere-egu2020-11705, 2020.
In this study we analyzed the information about the presence of different types of anthropogenic objects (settlements, transport infrastructure, mining areas, etc.) in the Arctic zone of Russia. This information was taken from open Internet-sources: maps, cartographic projects, databases, schemes of regional development of the Russian Federation. Data analysis shows than only about 20% of Russian Arctic’s area is affected by economic development, meanwhile on the other 80% of the area there are practically no anthropogenic objects.
The economic development of the Arctic region decreases from West to East of Russia. The Republic of Karelia is characterized by the highest economic development level (only 13,1% of the area are not affected by any economic activities), the lowest levels have Krasnoyarskiy krai (95,2%) and the Republic of Sakha (Yakutia) (87,2%). Data on the presence, position, and types of anthropogenic objects were subjected to the k-means method of cluster analysis in order to identify characteristic combinations of objects corresponding to different types of development. Within the Arctic zone of Russia six main types of economical use of the territory were identified. Each of these types was characterized by the dominance of a certain type of anthropogenic objects (settlements, roads, mining industry objects, oil and gas transport infrastructure, wood industry objects).
Each type of economic use of the territory is characterized by a specific anthropogenic transformation of the topography. The greatest transformation of the topography and geomorphological processes was found within open mining areas; the least influence on the topography is associated with some linear transport structures (unpaved roads and underground gas pipelines). In general, economic activity in the Russian Arctic is relatively low: anthropogenic transformation of topography and geomorphic processes is typical for an area of about 667 thousand square km, i.e. about 18% of the total area of the Russian Arctic.
This study is supported by the Russian Foundation for Basic Research (RFBR), Project № 18-05-60200 "Anthropogenic transformation of Arctic Landscapes for the last 100 years".
How to cite: Eremenko, E., Bredikhin, A., Kharchenko, S., Belyaev, Y., Matlakhova, E., Romanenko, F., Bolysov, S., and Fuzeina, Y.: Anthropogenic Transformation of Russian Arctic: dividing the area into zones based on cluster analysis, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11705, https://doi.org/10.5194/egusphere-egu2020-11705, 2020.
EGU2020-1386 | Displays | ITS2.15/BG2.25
Dynamics of gaseous elemental mercury during polar spring and winter
Fidel Pankratov, Alexander Mahura, Valentin Popov, and Vladimir Masloboev
Since June 2001, long-term monitoring of gaseous elemental mercury (hereafter, mercury) in the atmospheric surface layer has been conducted near the Amderma settlement (69.72°N, 61.62°E; Yugor Peninsula, Russia).
During this monitoring, periods of lowered mercury concentrations (<1.0 ng m-3) were observed in spring (March–May) of 2005 and 2011. In spring 2005, the intensity of solar radiation did not affect the number of low mercury concentration values; with increasing solar activity during the day there was, in fact, a reverse effect: from 9 until 15 h the number of lowered concentration values decreased. The highest number of lowered concentrations and of atmospheric mercury depletion events, AMDEs (12 events), was observed in the evening hours. In 2005, no mercury depletion was observed at the daily maximum of solar activity. This may be due to the lack of large amounts of marine aerosol in the atmospheric surface layer, although photochemical reactions did not stop. In spring 2011, during increased solar activity, the number of AMDEs increased to 62. In that year no ice cover was observed in the coastal area, and consequently large amounts of sea aerosol could be present in the surface layer of the atmosphere.
For the winter (December–January) period, the maximum number of lowered mercury concentration values (495 in total) and of AMDEs (32 events) was recorded in 2010–2011. A similar situation had previously been observed only in the winter of 2006–2007 (13 events). As there is no direct sunlight in this period, the removal of mercury from the atmosphere may be caused by a combination of physical and chemical processes not related to photochemistry. From mid-January the day length increases, but the solar energy is still insufficient to activate photochemical reactions, and the predominant type of solar radiation is diffuse rather than direct; nevertheless, AMDEs were still recorded at that time (18 events were registered in January 2011).
After mid-March, the Sun's declination increases and the incoming solar energy is sufficient to activate photochemistry. However, no linear relationship for AMDEs was identified during March–May. The maximum number of lowered mercury concentration values (300) and of AMDEs (21 events, with durations up to 66 hours) was registered in April. Such AMDEs are connected with elevated aerosol concentrations in the absence of ice cover in the marine coastal zone. A contribution of anthropogenic aerosols (from the burning of fossil fuels) to the deposition of mercury from the atmosphere onto the underlying surface cannot be excluded.
How to cite: Pankratov, F., Mahura, A., Popov, V., and Masloboev, V.: Dynamics of gaseous elemental mercury during polar spring and winter, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1386, https://doi.org/10.5194/egusphere-egu2020-1386, 2020.
EGU2020-763 | Displays | ITS2.15/BG2.25
Evaluation of radiation forcing from snow pollution by black carbon emissions from forest fires using the SNICAR radiation model and data from the INMCM5 climate model
Alexey Chernenkov, Sergey Kostrykin, and Veronika Ginzburg
In this study we consider the climate effect of one of the atmospheric aerosols, black carbon (BC). Estimates are obtained for the changes in surface albedo and the additional radiative forcing associated with accounting for BC emissions from forest fires. For this we used data from a historical experiment with the INMCM5 climate model [1] developed at the INM RAS, as well as the one-dimensional SNICAR (SNow, ICe, and Aerosol Radiative) model [2] of radiative transfer in the snow layer. In the historical experiment with INMCM, carried out as part of the CMIP6 project [3], the climate of the Earth system was simulated from 1850 to 2014, with the external forcing on the Earth system set as close as possible to the observed one.
Based on the monthly mean model data on snow depth and on the black carbon flux from the atmosphere, and assuming uniform mixing of the deposited BC in the snow, the BC concentration in each cell of the model grid was calculated. Using the obtained concentrations, the radiative forcing caused by BC emissions from forest fires was then calculated with the SNICAR model.
Since anthropogenic emissions of black carbon far exceed emissions from biomass burning, two seasons differing in forest fire intensity were chosen to study the role of forest fires in the radiation balance. Based on the GFED (Global Fire Emissions Database) [4], 1998 (large mid-latitude emissions of black carbon to the atmosphere from biomass burning) and 2001 (small emissions) were selected; it is also known that the anthropogenic source changed only slightly over this period. The additional forcing amounted to 2–3 W m-2 locally, with a relative estimation error of the order of 10–15%. The calculated annual mean radiative forcing over land is in good agreement with [2], [5].
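The concentration step can be illustrated with a small helper; the function name, units, and sample values are hypothetical, but the arithmetic follows the stated assumption of uniform mixing of the deposited BC through the snowpack (expressed here via snow water equivalent rather than snow depth and density separately):

```python
def bc_concentration_in_snow(bc_deposit_kg_m2, swe_kg_m2):
    """BC mass fraction in snow, assuming the deposited BC is mixed
    uniformly through the snowpack. Inputs per grid cell:
      bc_deposit_kg_m2 -- accumulated BC deposition flux, kg m-2
      swe_kg_m2        -- snow water equivalent (snow mass), kg m-2
    Returns the concentration in ng of BC per g of snow."""
    if swe_kg_m2 <= 0.0:
        return 0.0  # no snow: concentration undefined, report zero
    return bc_deposit_kg_m2 / swe_kg_m2 * 1e9  # kg/kg -> ng/g

# Hypothetical cell: 5e-8 kg m-2 of BC deposited onto 100 kg m-2 of snow
c_bc = bc_concentration_in_snow(5e-8, 100.0)  # -> 0.5 ng/g
```

A per-cell value of this kind is what would be passed into a snow radiative transfer model such as SNICAR to obtain the albedo change and forcing.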
This work is supported by RFBR project No.18-05-60183.
List of references:
How to cite: Chernenkov, A., Kostrykin, S., and Ginzburg, V.: Evaluation of radiation forcing from snow pollution by black carbon emissions from forest fires using the SNICAR radiation model and data from the INMCM5 climate model, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-763, https://doi.org/10.5194/egusphere-egu2020-763, 2020.
EGU2020-5892 | Displays | ITS2.15/BG2.25
On Space Weather factors which can impact terrestrial physical and biological processes
Olga Stupishina and Elena Golovina
The main idea of our work is to identify promising directions for the investigation of space factors that can affect physical and biological processes at the Earth's surface. Some decades ago this complex of factors was named "Space Weather". The main purpose of our work is therefore to discover the connection between Space Weather and terrestrial weather, as well as the impact of this environmental complex (Space Weather plus terrestrial weather) on biological objects and thereby on human health.
The first part of the presented work describes the Space Weather characteristics at the moments of appearance of long-lived (more than 10 days) atmospheric pressure systems at different terrestrial latitudes. These long-lived pressure systems (LPS) are of interest because some of them (namely anticyclones) can block pressure fields and thus create situations dangerous for human health and human activity. The locations at different latitudes were: (1) Saint-Petersburg (59°57′N, 30°19′E) and (2) Tambov (52°43′N, 41°27′E). This latitude difference is of interest because Space Weather variations are known to affect northern and southern places differently, and we want to study this difference. The time intervals were: (1) 1999–2014 (Saint-Petersburg) and (2) 2007–2014 (Tambov). The Space Weather parameters were: (1) global variations of Solar Activity (SA) parameters; (2) daily characteristics of the SA flare component in various bands of the electromagnetic spectrum; (3) variations of interplanetary space characteristics in the Earth's vicinity; (4) variations of daily statistics of Geomagnetic Field (GMF) characteristics. At the moments of LPS appearance we found notable behaviour of the following Space Weather characteristics: variations of all global SA indices, of the number of low-energy (C-class) X-ray solar flares, of proton fluxes, and of the daily statistics of GMF parameters. We also found a latitude difference in the atmospheric response to the Space Weather impact.
The second part of our work contains the results of an investigation of the environmental (Space Weather plus terrestrial weather) impact on human health, carried out for the Saint-Petersburg region (the northern location of the previous part). Human health status was indicated by: (1) cardiac rhythm variations (CRV) of patients in the clinic of the Medicine Academy; (2) sudden cardiac deaths (SCD) in the Research Institute of Emergency Medicine; and (3) recorded hard days in 6 local clinics in different parts of Saint-Petersburg and its suburbs. We found that dramatic cardiac events (CRV extrema, SCD maxima, hard days in clinics) are connected with variations of the number of solar radio bursts (of the "noise storm" type), with the spread of the daily statistics (coefficient of variation) of the GMF z-component, and with the spread of the daily statistics (coefficient of oscillation) of air temperature.
The results of our work may be used as a basis for environmental hazard monitoring.
How to cite: Stupishina, O. and Golovina, E.: On Space Weather factors which can impact terrestrial physical and biological processes, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5892, https://doi.org/10.5194/egusphere-egu2020-5892, 2020.
EGU2020-4880 | Displays | ITS2.15/BG2.25
Urban aerosol in Moscow megacity and its radiative effects according to the AeroRadCity experiment and COSMO-ART modelling
Nataly Chubarova, Elizaveta Androsova, Elena Volpert, Alexander Kirsanov, Bernhard Vogel, Heike Vogel, Olga Popovicheva, Irina Eremina, and Gdaly Rivin
The AeroRadCity urban aerosol experiment over the Moscow megacity was carried out during spring 2018 and 2019. The experiment included a measurement campaign at the Moscow MSU Meteorological Observatory (MO) and numerical experiments with the COSMO-ART model (Vogel et al., 2010; Vilfand et al., 2017). We examined the dynamics of aerosol properties and their radiative effects under various meteorological conditions using both columnar and surface aerosol measurements (the AERONET dataset, PM10 mass concentration, black carbon (BC), different aerosol gas precursors, etc.). To characterize urban pollution, special attention was given to the analysis of columnar and surface absorption Ångström exponents, low values of which indicate BC dominance resulting from high-temperature fuel combustion in transport engines. We obtained a positive, statistically significant dependence of AOD on PM and BC concentrations, with a pronounced bifurcation point around PM10 = 0.04 mg m-3. Model and experimental data demonstrated positive relationships of BC with PM10, NO2 and SO2 over the Moscow megacity (Chubarova et al., 2019). The analysis of aerosol radiative effects in clear-sky conditions revealed losses of up to 30% in UV irradiance and 15% in shortwave irradiance at high AOD. Much more intense attenuation of radiation is observed in the afternoon, when remote pollution sources affect solar fluxes under elevated boundary layer conditions. The negative (cooling) radiative forcing (RF) at the top of the atmosphere varied from -20 W m-2 to -1 W m-2, with an average of -8 W m-2. The minimum (in absolute value) RF corresponded to the lowest AOT and single scattering albedo. A statistically significant regression of single scattering albedo on the BC/PM10 fraction was obtained at high levels of particle dispersion intensity.
The urban AOT550 calculated with the COSMO-ART model was compared with measurements in Moscow and in Zvenigorod at the A.M. Obukhov Institute of Atmospheric Physics RAS. The comparison showed satisfactory agreement between modelled and measured urban aerosol pollution (dAOT = 0.017 and dAOT = 0.013, respectively). On some days the difference increased up to 0.05 under conditions of weak pollutant dispersion.
During the experiment a high correlation (R2 = 0.95) was revealed between the insoluble component and the total mineralization of rain precipitation, which indicates that 70% of aerosol deposition occurs as the insoluble fraction. We show that at initial concentrations C0(PM) > 10 μg m-3 the exponential washout coefficients are significant for PM (α(PM) = 0.17 ± 0.09 hour-1) and insignificant for BC (α(BC) = 0.07 ± 0.10 hour-1). At C0(PM) < 10 μg m-3, the α values for both PM and BC are close to zero. According to numerical experiments with and without wet deposition, α was estimated as 0.08 hour-1, which fits the confidence interval obtained from the measurements. The work was supported by the Russian Science Foundation, grant # 18-17-00149.
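The absorption Ångström exponent used here to flag BC dominance can be computed from absorption optical depths at two wavelengths; the wavelengths and AAOD values below are illustrative, not campaign data:

```python
import math

def absorption_angstrom_exponent(aaod_1, lam_1, aaod_2, lam_2):
    """Absorption Angstrom exponent (AAE) from absorption aerosol
    optical depths at two wavelengths (lam in the same units):
        AAE = -ln(AAOD1 / AAOD2) / ln(lam1 / lam2)
    Values near 1 are typical of black carbon; larger values point to
    brown carbon or dust contributions."""
    return -math.log(aaod_1 / aaod_2) / math.log(lam_1 / lam_2)

# Hypothetical pair consistent with absorption scaling as ~1/wavelength,
# i.e. a BC-like spectrum, so the result is close to 1:
aae = absorption_angstrom_exponent(0.02, 440.0, 0.013037, 675.0)
```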
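The exponential washout model behind these coefficients can be sketched as follows; the concentration and time values are illustrative, while α = 0.17 hour-1 is the PM value quoted above:

```python
import math

def washout(c0, alpha_per_hour, t_hours):
    """Exponential rain washout of a surface concentration:
    C(t) = C0 * exp(-alpha * t)."""
    return c0 * math.exp(-alpha_per_hour * t_hours)

# With alpha = 0.17 hour-1, an initial PM10 of 20 ug/m3 (hypothetical)
# is halved after ln(2)/alpha, i.e. roughly 4 hours of rain:
half_life_h = math.log(2) / 0.17
c_after = washout(20.0, 0.17, half_life_h)  # -> 10 ug/m3
```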
References:
Chubarova, N.E., et al. (2019). Geography, Environment, Sustainability, 12(4), 114–131.
Vogel et al. (2010). In Integrated Systems of Meso-meteorological and Chemical Transport Models, Springer, pp. 75–80.
Vilfand et al. (2017). Russian Meteorology and Hydrology, 42(5), 292–298. DOI: 10.3103/S106837391705003X.
How to cite: Chubarova, N., Androsova, E., Volpert, E., Kirsanov, A., Vogel, B., Vogel, H., Popovicheva, O., Eremina, I., and Rivin, G.: Urban aerosol in Moscow megacity and its radiative effects according to the AeroRadCity experiment and COSMO-ART modelling, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4880, https://doi.org/10.5194/egusphere-egu2020-4880, 2020.
EGU2020-7970 | Displays | ITS2.15/BG2.25
New insights into physical and chemical atmospheric transformations of biomass burning aerosol from wildfires in Siberia
Igor Konovalov, Nikolai Golovushkin, Matthias Beekmann, and Valerii Kozlov
Wildfires in Siberia are a major source of aerosol in Northern Eurasia. Biomass burning (BB) aerosol can significantly impact the Earth’s radiative balance through absorption and scattering of solar radiation, interactions with clouds and changes of surface albedo due to deposition of black and brown carbon on ice and snow. There is growing evidence that atmospheric aging of BB aerosol can be associated with profound but diverse chemical and physical transformations which, in most cases, are not adequately represented in chemistry-transport and climate models that are widely used in assessments of radiative and climate effects of atmospheric pollutants.
The idea of this study is to identify changes in the optical properties of aging BB aerosol using the absorption and extinction aerosol optical depths (AAOD and AOD) retrieved from OMI and MODIS satellite observations, and to elucidate the key processes behind these changes using Mie-theory-based calculations along with simulations with chemistry-transport and microphysical box models that represent the evolution of organic particulate matter within the volatility basis set (VBS) framework. The study focuses on a major outflow of BB plumes from Siberia into the European part of Russia in July 2016. The analysis of the satellite data is complemented by original results of biomass burning aerosol aging experiments in a large aerosol chamber.
The results indicate that the BB aerosol evolution during the first 10–20 hours features strong secondary organic aerosol (SOA) formation, resulting in a substantial increase in the particle single scattering albedo. The further evolution is affected by the loss of organic matter, probably due to evaporation and oxidation. The results also indicate that although the brown carbon contained in the primary aerosol is rapidly lost (consistent with available independent observations) due to evaporation and photochemical destruction of chromophores, it is partly replaced by weakly absorbing low-volatility SOA.
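The VBS treatment of organic particulate matter can be illustrated with a minimal equilibrium-partitioning iteration; the volatility bins and masses below are invented for the example and do not reproduce the authors' model setup:

```python
def vbs_partition(c_tot, c_star, iters=200, tol=1e-9):
    """Equilibrium gas-particle partitioning over volatility bins
    (volatility basis set). For each bin i the particle fraction is
    1 / (1 + C*_i / C_OA), so C_OA is found by fixed-point iteration:
      c_tot[i]  -- total (gas + particle) organic mass in bin i, ug/m3
      c_star[i] -- effective saturation concentration of bin i, ug/m3
    Returns the particle-phase organic aerosol mass C_OA in ug/m3."""
    c_oa = max(sum(c_tot) * 0.5, 1e-6)  # initial guess
    for _ in range(iters):
        c_oa_new = sum(c / (1.0 + cs / c_oa)
                       for c, cs in zip(c_tot, c_star))
        if abs(c_oa_new - c_oa) < tol:
            break
        c_oa = c_oa_new
    return c_oa

# Three hypothetical bins: nearly non-volatile, semi-volatile, volatile.
c_oa = vbs_partition([1.0, 1.0, 1.0], [0.01, 1.0, 100.0])
```

Aging shifts mass between such bins (SOA formation moves mass to low volatility; evaporation and oxidation remove it), which is how the transformations discussed below are represented in the box models.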
In general, this study reveals that aging BB aerosol from wildfires in Siberia undergoes major physical and chemical transformations that have to be taken into account in assessments of the impact of Siberian fires on the radiative balance in Northern Eurasia and the Arctic. It also proposes a practical way to address these complex transformations in chemistry-transport and climate models.
The study was supported by the Russian Science Foundation (grant agreement No. 19-77-20109).
How to cite: Konovalov, I., Golovushkin, N., Beekmann, M., and Kozlov, V.: New insights into physical and chemical atmospheric transformations of biomass burning aerosol from wildfires in Siberia, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7970, https://doi.org/10.5194/egusphere-egu2020-7970, 2020.
Wildfires in Siberia are a major source of aerosol in Northern Eurasia. Biomass burning (BB) aerosol can significantly impact the Earth’s radiative balance through absorption and scattering of solar radiation, interactions with clouds and changes of surface albedo due to deposition of black and brown carbon on ice and snow. There is growing evidence that atmospheric aging of BB aerosol can be associated with profound but diverse chemical and physical transformations which, in most cases, are not adequately represented in chemistry-transport and climate models that are widely used in assessments of radiative and climate effects of atmospheric pollutants.
An idea of this study is to identify changes in the optical properties of aging BB aerosol using absorption and extinction aerosol optical depths (AAOD and AOD) retrieved from the OMI and MODIS satellite observations and to elucidate key processes behind these changes using the Mie-theory-based calculations along with simulations with chemistry-transport and microphysical box models involving representation of the evolution of organic particulate matter within the VBS framework. The study focuses on a major outflow of BB plumes from Siberia into the European part of Russia in July 2016. The analysis of the satellite data is complemented by the original results of biomass burning aerosol aging experiments in a large aerosol chamber.
The results indicate that the BB aerosol evolution during the first 10-20 hours features strong secondary organic aerosol (SOA) formation, resulting in a substantial increase in the particle single scattering albedo. Further evolution is affected by the loss of organic matter, probably due to evaporation and oxidation. The results also indicate that although brown carbon contained in the primary aerosol is rapidly lost (consistent with available independent observations) due to evaporation and photochemical destruction of chromophores, it is partly replaced by weakly absorbing low-volatility SOA.
In general, this study reveals that aging BB aerosol from wildfires in Siberia undergoes major physical and chemical transformations that have to be taken into account in assessments of the impact of Siberian fires on the radiative balance in Northern Eurasia and the Arctic. It also proposes a practical way to address these complex transformations in chemistry-transport and climate models.
The study was supported by the Russian Science Foundation (grant agreement No. 19-77-20109).
References
- Konovalov, I.B., Beekmann, M., Berezin, E.V., Formenti, P., and Andreae, M.O.: Probing into the aging dynamics of biomass burning aerosol by using satellite measurements of aerosol optical depth and carbon monoxide, Atmos. Chem. Phys., 17, 4513–4537, 2017.
- Konovalov, I.B., Lvova, D.A., Beekmann, M., Jethva, H., Mikhailov, E.F., Paris, J.-D., Belan, B.D., Kozlov, V.S., Ciais, P., and Andreae, M.O.: Estimation of black carbon emissions from Siberian fires using satellite observations of absorption and extinction optical depths, Atmos. Chem. Phys., 18, 14889–14924, 2018.
- Konovalov, I.B., Beekmann, M., Golovushkin, N.A., and Andreae, M.O.: Nonlinear behavior of organic aerosol in biomass burning plumes: a microphysical model analysis, Atmos. Chem. Phys., 19, 12091–12119, 2019.
How to cite: Konovalov, I., Golovushkin, N., Beekmann, M., and Kozlov, V.: New insights into physical and chemical atmospheric transformations of biomass burning aerosol from wildfires in Siberia, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7970, https://doi.org/10.5194/egusphere-egu2020-7970, 2020.
EGU2020-11582 | Displays | ITS2.15/BG2.25
PEEX Integrated Multi-scales and -Process Modelling for Environmental Applications
Alexander Mahura, Alexander Baklanov, Tuukka Petäjä, Roman Nuterman, Serguei Ivanov, Silas Michaelides, Igor Ruban, Risto Makkonen, Hanna K Lappalainen, Sergej Zilitinkevich, and Markku Kulmala
The Pan-Eurasian EXperiment (PEEX; www.atm.helsinki.fi/peex) is a long-term research programme. One of the components of the PEEX Research Infrastructure is the PEEX Modelling Platform (PEEX-MP; www.atm.helsinki.fi/peex/index.php/modelling-platform), which includes more than 30 models running at different scales and resolutions, over different geographical domains, and resolving different physical, chemical and biological processes. These models are used as research tools providing insights and valuable output for environmental and population assessments at different levels, and together cover the main components: atmosphere, hydrosphere, pedosphere and biosphere. The seamless multi-scale, multi-process coupled modelling concept developed here is an important and advanced step towards realising the PEEX research agenda presented in the PEEX Science Plan (www.atm.helsinki.fi/peex/images/PEEX_Science_Plan.pdf). Access to High Performance Computing infrastructure is essential for such modelling.
In particular, the Enviro-HIRLAM (Environment – HIgh Resolution Limited Area Model) and HARMONIE (HIRLAM-ALADIN Research for Meso-scale Operational NWP In Europe) models can be applied in multi-scale, multi-process studies of interactions and feedbacks between meteorology and aerosols/chemistry; between aerosols, cloud formation and radiative forcing; of boundary-layer parameterizations; of the impact of urbanization on urban weather and climate; of assessments for humans and the environment; and of improved prediction of extreme weather and pollution events. All of these can be studied at different spatial (urban, subregional, regional) and temporal scales. In addition, integrating the modelling results into a GIS environment adds value for subsequent risk, vulnerability and consequence studies.
As part of the Enviro-PEEX project (www.atm.helsinki.fi/peex/index.php/enviro), the two models were used, respectively, to study aerosol feedbacks and interactions in the Arctic-boreal domain at regional scale, and the effects of radar data assimilation at mesoscale resolution.
The Enviro-HIRLAM model was run in long-term mode at 15-5 km resolutions for a reference case and for aerosol effects (direct, indirect, and combined) with ECMWF boundary conditions and pre-processed anthropogenic, biogenic and natural emissions. Analysis of differences between model runs for basic statistics (average, median, maximum, minimum, standard deviation) showed less pronounced variations of average concentrations in Arctic regions than in other regions, and more pronounced variations of maximum concentrations in Russian Siberia and the Urals. Monthly averaged sulphur dioxide was larger over mid-latitudes (influence of anthropogenic sources), with maxima due to long-range atmospheric transport. Particulate matter is lower in the Arctic than at mid-latitudes, but its composition is dominated by sea salt aerosols.
The HARMONIE model was tested with pre-processing (optimising inner parameters) and data assimilation of radar reflectivity, which minimises the representativeness error associated with the discrepancy between the resolutions of the information sources. The method improved the prediction of precipitation rates and spatial patterns within the radar coverage areas, and better reproduced mesoscale belts and cell patterns of few-to-ten size in the precipitation fields. Compatibility between model resolution and smoothed radar observation density was achieved by a "cube-smoothing" approach, which ensures equivalent representation of precipitation (reflectivity) structures in both model and observations, in the sense of equally preserving the scales of precipitation patterns.
Moreover, for selected PEEX-MP models used by UHEL-INAR, such as Enviro-HIRLAM, EC-Earth and MALTE-Box, a series of science-education-oriented trainings/schools is organized in April and August 2020 (ums.rshu.ru & worldslargerivers.boku.ac.at/wlr/index.php/ysss.html) as part of the PEEX Educational Platform activities.
How to cite: Mahura, A., Baklanov, A., Petäjä, T., Nuterman, R., Ivanov, S., Michaelides, S., Ruban, I., Makkonen, R., Lappalainen, H. K., Zilitinkevich, S., and Kulmala, M.: PEEX Integrated Multi-scales and -Process Modelling for Environmental Applications, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11582, https://doi.org/10.5194/egusphere-egu2020-11582, 2020.
EGU2020-22602 | Displays | ITS2.15/BG2.25
Comprehensive environmental observations and their integration in the Arctic-boreal environment
Tuukka Petäjä, Hanna Lappalainen, Jaana Bäck, and Markku Kulmala
The environment in the Arctic and boreal regions is changing rapidly due to megatrends such as globalization, development of new transport routes, demography and the use of natural resources. These megatrends have environmental effects, particularly in the terrestrial, marine and cryosphere domains, which are undergoing substantial changes. Local, regional, national and international decision-making bodies require fact-based services to tackle the challenges of rapid environmental change. In this presentation we present results from the "integrative and Comprehensive Understanding on Polar Environments" (iCUPE) project, which combines in-situ observations and satellite remote sensing to provide novel data and scientific understanding of Arctic pollution. We will also summarize the benefits arising from integrated and co-located observations that contribute to different European environmental research infrastructures, with practical scientific insights from such a synthesis.
How to cite: Petäjä, T., Lappalainen, H., Bäck, J., and Kulmala, M.: Comprehensive environmental observations and their integration in the Arctic-boreal environment, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22602, https://doi.org/10.5194/egusphere-egu2020-22602, 2020.
EGU2020-2355 | Displays | ITS2.15/BG2.25
Data mining and machine learning to enhance new-particle formation identification and analysis
Martha A. Zaidan, Pak L. Fung, Darren Wraith, Tuomo Nieminen, Tareq Hussein, Veli-Matti Kerminen, Tuukka Petäjä, and Markku Kulmala
Data Mining (DM) and Machine Learning (ML) have become popular statistical learning tools for solving many complex scientific problems. In this work, we present two case studies that used DM and ML techniques to enhance new-particle formation (NPF) identification and analysis. Extensive measurements and large data sets related to NPF and other ambient variables have been collected in arctic and boreal regions. The focus area of our studies is the SMEAR II station in the Hyytiälä forest, Finland, which lies within the area of interest of the Pan-Eurasian Experiment (PEEX).
Atmospheric NPF is an important source of climatically relevant atmospheric aerosol particles. NPF is typically observed by monitoring the time evolution of ambient aerosol particle size distributions. Due to the noisiness of real-world ambient data, currently the most reliable way to classify measurement days into NPF event and non-event days is manual visual inspection. However, with long multi-year time series, manual labour is extremely time-consuming, and human subjectivity poses challenges for comparing results across data sets. Here, an ML classifier is used to classify NPF event and non-event days using a manually labelled database. The results demonstrate the potential of ML-based approaches and suggest further exploration in this direction.
Furthermore, NPF is a highly non-linear process that includes the atmospheric chemistry of precursors and clustering physics, as well as subsequent growth before NPF can be observed. Thanks to ongoing efforts, there now exists a tremendous amount of atmospheric data obtained through continuous measurements directly from the atmosphere. This makes analysis by human experts difficult, but it also enables the use of modern data science techniques. Here, we demonstrate the use of a DM method, mutual information (MI), to relate NPF events to a wide variety of simultaneously monitored ambient variables. The MI method reproduces the same results without supervision and without requiring deep understanding of the underlying physics.
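The MI screening described above can be sketched with a minimal histogram-based (plug-in) estimator; the bin count and the synthetic variables below are illustrative assumptions, not values from the study:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plug-in estimate of mutual information between two variables (in nats)."""
    # Joint histogram normalised into a joint probability table
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    # Marginals, kept 2-D so the outer product px @ py broadcasts to (bins, bins)
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0  # skip empty cells to avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Illustrative use: MI ranks a dependent variable above an unrelated one
rng = np.random.default_rng(0)
proxy = rng.normal(size=5000)                    # stand-in for one ambient variable
related = proxy + 0.1 * rng.normal(size=5000)    # strongly dependent variable
unrelated = rng.normal(size=5000)                # independent variable
assert mutual_information(proxy, related) > mutual_information(proxy, unrelated)
```

Ranking ambient variables by such MI scores flags non-linear dependencies that a Pearson correlation can miss, which is what makes the method attractive for an unsupervised survey of candidate NPF drivers.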
How to cite: Zaidan, M. A., Fung, P. L., Wraith, D., Nieminen, T., Hussein, T., Kerminen, V.-M., Petäjä, T., and Kulmala, M.: Data mining and machine learning to enhance new-particle formation identification and analysis, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2355, https://doi.org/10.5194/egusphere-egu2020-2355, 2020.
EGU2020-2693 | Displays | ITS2.15/BG2.25
Input-adaptive proxy of air quality parameters: A case study for black carbon in Helsinki, Finland
Pak L Fung, Martha A Zaidan, Salla Sillanpää, Anu Kousa, Jarkko V Niemi, Hilkka Timonen, Joel Kuula, Erkka Saukko, Krista Luoma, Tuukka Petäjä, Sasu Tarkoma, Markku Kulmala, and Tareq Hussein
Urban air pollution is a global challenge, and continuous air quality measurement is important for understanding the nature of the problem. However, missing data is a common issue in air quality measurement. In this study, we present a modified method to impute missing data with an input-adaptive proxy. We used black carbon (BC) concentration data from the Mäkelänkatu traffic site (TR) and the Kumpula urban background site (BG) in Helsinki, Finland in 2017–2018 as training sets. The input-adaptive proxy selects input variables from the other measured air quality variables based on their Pearson correlation coefficients with BC. To avoid overfitting, the proxy uses a least squares model with a bisquare weighting function and allows a maximum of three input variables. The generated models are then evaluated and ranked by adjusted coefficient of determination (adjR2), mean absolute error and root mean square error. BC concentration is first estimated with the best model; when input variables of the best model are missing, the proxy falls back to the second-best model, and so on, until all the gaps are filled.
The input-adaptive proxy filled 100% of the missing data, whereas a traditional proxy filled only 20–80% of the missing BC data. Furthermore, the overall performance of the input-adaptive proxy is reliable both at TR (adjR2=0.86–0.94) and at BG (adjR2=0.74–0.91). TR shows generally better regression performance because the BC level there can mostly be explained by traffic count, nitrogen oxides and the accumulation mode. In contrast, the sources of BC at BG are more heterogeneous, including traffic emissions and residential combustion, and the BC concentration is influenced by meteorological parameters; the rule of including at most three input variables may therefore explain the lower adjR2. The proxy works slightly better on workdays than on weekends at both sites. At TR, the proxy performs similarly in all seasons, while at BG it performs better in winter and autumn than in the other seasons. The simplicity, full coverage and high reliability of the input-adaptive proxy make it suitable for estimating other air quality parameters as well. Moreover, it can act as a virtual air quality sensor alongside on-site instruments.
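The model-ranking and fall-back logic of the input-adaptive proxy can be sketched as follows; this is a simplified illustration that uses ordinary least squares in place of the authors' bisquare-weighted fit, ranks only by adjR2, and uses synthetic data rather than the Helsinki measurements:

```python
import itertools
import numpy as np

def adj_r2(y, yhat, n_inputs):
    """Adjusted coefficient of determination for a model with n_inputs predictors."""
    n = len(y)
    r2 = 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)
    return 1.0 - (1.0 - r2) * (n - 1) / (n - n_inputs - 1)

def fit_ranked_models(X, y, max_inputs=3):
    """Fit one linear model per input combination (up to max_inputs), rank by adjR2."""
    models = []
    for k in range(1, max_inputs + 1):
        for combo in itertools.combinations(range(X.shape[1]), k):
            A = np.column_stack([X[:, combo], np.ones(len(y))])  # design matrix + intercept
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            models.append((adj_r2(y, A @ coef, k), combo, coef))
    models.sort(key=lambda m: m[0], reverse=True)
    return models

def predict_adaptive(models, x_row):
    """Predict with the best-ranked model whose inputs are all present (non-NaN)."""
    for _, combo, coef in models:
        vals = x_row[list(combo)]
        if not np.any(np.isnan(vals)):
            return float(np.append(vals, 1.0) @ coef)
    return np.nan  # no model usable for this row
```

Because every subset of up to three inputs gets its own ranked model, a row with gaps in some predictors still receives an estimate from the best model whose inputs happen to be available, which is how the proxy can reach full coverage.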
How to cite: Fung, P. L., Zaidan, M. A., Sillanpää, S., Kousa, A., Niemi, J. V., Timonen, H., Kuula, J., Saukko, E., Luoma, K., Petäjä, T., Tarkoma, S., Kulmala, M., and Hussein, T.: Input-adaptive proxy of air quality parameters: A case study for black carbon in Helsinki, Finland, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2693, https://doi.org/10.5194/egusphere-egu2020-2693, 2020.
EGU2020-8794 | Displays | ITS2.15/BG2.25
Revision of the current theory of unstably stratified turbulence and its potential implications for PEEX
Sergej Zilitinkevich and Irina Repina
Turbulence in unstably stratified flows is traditionally considered as chaotic eddies generated on equal terms by two very different mechanisms: mean velocity shears and buoyancy forces. By this means, vertical buoyant plumes comprising "convective turbulence" are not distinguished from three-dimensional shear-generated eddies comprising "mechanical turbulence". The latter are dynamically unstable and hence break down into smaller eddies, performing a direct cascade of turbulent kinetic energy (TKE) and other turbulence properties from larger to smaller scales towards molecular dissipation. The conventional theory does not distinguish convective plumes from mechanical eddies and, in effect, postulates that plumes also perform the direct cascade.
We argue that this conventional vision is erroneous except in the trivial case of dominant dynamic instability, when mechanical eddies destroy convective plumes and violently involve them in the direct cascade. In geophysical convective boundary layers (CBLs), this condition is satisfied only in the thin near-surface sublayer, usually comprising less than one per cent of the CBL. Beyond this sublayer, the dominant role belongs to convective plumes that do not break down but merge to form larger plumes, performing an inverse cascade that culminates in the conversion of convective TKE into kinetic energy of CBL-scale self-organised structures: cells or rolls. Meanwhile, weak mechanical turbulence generated by the mean-flow shears performs the usual direct cascade. Hence, horizontal TKE is fully mechanical, whereas vertical TKE is almost fully convective. The key role in this unorthodox picture is played by the rates of conversion of TKE, or another property of convective turbulence, into the kinetic energy, or another property, of the CBL-scale self-organised structures. We formulate this vision of unstably stratified turbulence theoretically and verify it experimentally using the TKE budget of a horizontally homogeneous atmospheric surface-layer flow.
How to cite: Zilitinkevich, S. and Repina, I.: Revision of the current theory of unstably stratified turbulence and its potential implications for PEEX, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8794, https://doi.org/10.5194/egusphere-egu2020-8794, 2020.
EGU2020-9034 | Displays | ITS2.15/BG2.25
Large organized structures in stably stratified turbulent shear flows
Andrey Glazunov, Evgeny Mortikov, Grigory Zasko, Yuri Nechepurenko, and Sergej Zilitinkevich
We analyzed data from numerical simulations of stably stratified turbulent shear flows. It is shown that, along with chaotic turbulence, the flows contain large organized structures. In the temperature field, these structures appear as inclined layers with weakly stable stratification, separated by very thin layers with large temperature gradients. The existence of such layered structures in nature is indirectly confirmed by analysis of field measurements. An increase of the turbulent Prandtl number with increasing gradient Richardson number was found in the simulation data. We propose the hypothesis that the physical mechanism maintaining turbulence under supercritically stable stratification is connected with the revealed structures. It is shown that the spatial scales and shapes of the identified organized structures can be explained by computing optimal disturbances of a simplified linear model.
This study was supported by the Russian Foundation for Basic Research (grants nos. 18-05-60126, 20-05-00776) and by Academy of Finland project ClimEco no. 314 798/799 (2018-2020).
How to cite: Glazunov, A., Mortikov, E., Zasko, G., Nechepurenko, Y., and Zilitinkevich, S.: Large organized structures in stably stratified turbulent shear flows., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9034, https://doi.org/10.5194/egusphere-egu2020-9034, 2020.
EGU2020-13175 | Displays | ITS2.15/BG2.25
Fire activity and Aerosol Optical Depth over PEEX area for the last two decadesLarisa Sogacheva, Anu-Maija Sundström, Gerrit de Leeuw, Antti Arola, Tuukka Petäjä, Hanna K. Lappalainen, and Markku Kulmala
The Pan-Eurasian Experiment Program (PEEX) is an interdisciplinary scientific program bringing together ground-based in situ and remote sensing observations, satellite measurements and modeling tools aiming to improve the understanding of land-water-atmosphere interactions, feedback mechanisms and their effects on the ecosystem, climate and society in northern Eurasia, Russia and China. One of the pillars of the PEEX program is the ground-based observation system with new stations being established across the whole PEEX domain complementing existing infrastructure. However, in view of the large area covering thousands of kilometres, large gaps will remain where no or little observational information will be available. The gap can partly be filled by satellite remote sensing of relevant parameters as regards atmospheric composition, land and water surface properties including snow and ice, and vegetation.
Forest fires and the corresponding emissions to the atmosphere dramatically change the atmospheric composition in the case of long-lasting fire events, which may cover extended areas. In the burned areas, CO2 exchange, as well as emissions of various other compounds, increases, which may contribute to climate change by changing the radiative budget through aerosol-cloud interaction and cloud formation. In the boreal forest, after CO2, CO and CH4, the largest emission factors for individual species were for formaldehyde, followed by methanol and NO2 (Simpson et al., ACP, 2011). Long-lived emitted components, e.g., black carbon, may be transported further to distant areas and measured at the surface far from the burned areas.
During the last few decades, several burning episodes have been observed over the PEEX area by satellites (as fire counts), specifically over Siberia and central Russia. Fire activity can also be seen in increased Aerosol Optical Depth (AOD) retrieved from satellites, as well as in fire radiative power (FRP) calculated from the satellite data. In the current work, we study time series of fire activity, FRP and AOD over the PEEX area and specifically over selected cities.
How to cite: Sogacheva, L., Sundström, A.-M., de Leeuw, G., Arola, A., Petäjä, T., Lappalainen, H. K., and Kulmala, M.: Fire activity and Aerosol Optical Depth over PEEX area for the last two decades, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13175, https://doi.org/10.5194/egusphere-egu2020-13175, 2020.
EGU2020-2679 | Displays | ITS2.15/BG2.25
Analysis of CO2 content near Russian cities from OCO-2 satellite measurementsAnastasia Nikitenko, Yury Timofeyev, Yana Virolainen, Ivan Berezin, and Alexander Polyakov
The growth of greenhouse gas concentrations, primarily carbon dioxide, is the main cause of modern changes in the Earth's climate. At the same time, although cities occupy a relatively small area (~3% of the territory), they are responsible for more than 70% of the anthropogenic emissions caused by energy supply systems. Therefore, studies of CO2 distributions in cities and surrounding regions, as well as quantitative estimates of urban emissions, are an urgent problem.
The paper presents a comparative analysis of CO2 contents and their variations for a number of Russian cities (Moscow, St. Petersburg, Yekaterinburg, Magnitogorsk and Norilsk) on the basis of OCO-2 satellite measurement data. The studies were carried out using satellite data sets that vary from high to average quality. For all these cities, the ensembles differ in the number of measurement days, the total number of CO2 measurements, and the spatial and temporal coverage. For example, the high-quality ensemble covers ~90% of the spring and summer months, i.e., it provides an opportunity to study CO2 variations in the warm season. The ensemble of measurements with average accuracy covers the entire year more evenly.
The paper studies various characteristics of the column-averaged dry-air mole fraction of CO2 (XCO2) for the five cities, namely, minimal and maximal values, amplitudes of variations, daily average maximal and minimal values, standard deviations, etc. Possibilities of using the OCO-2 data for estimating anthropogenic emissions in different cities are considered.
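The per-city column statistics described above can be sketched as follows; this is an illustrative computation on hypothetical XCO2 soundings, not the authors' actual processing chain (the function name and the example values are invented for illustration).

```python
import numpy as np

def xco2_summary(xco2_ppm):
    """Summary statistics for a series of XCO2 soundings (ppm) over one city.

    `xco2_ppm` is a hypothetical 1-D sequence of column-averaged dry-air
    mole fractions of CO2 retrieved over a target area.
    """
    x = np.asarray(xco2_ppm, dtype=float)
    return {
        "min": x.min(),
        "max": x.max(),
        "amplitude": x.max() - x.min(),  # range of variation
        "mean": x.mean(),
        "std": x.std(ddof=1),            # sample standard deviation
    }

# Example with made-up soundings (ppm):
stats = xco2_summary([407.1, 408.4, 406.8, 409.0, 407.7])
```

In practice such statistics would be computed separately for each quality-filtered ensemble (high-quality vs. average-accuracy) and for each city.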
How to cite: Nikitenko, A., Timofeyev, Y., Virolainen, Y., Berezin, I., and Polyakov, A.: Analysis of CO2 content near Russian cities from OCO-2 satellite measurements, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2679, https://doi.org/10.5194/egusphere-egu2020-2679, 2020.
EGU2020-18080 | Displays | ITS2.15/BG2.25
Satellite-derived spatiotemporal patterns of environmental changes caused by 2018-2019 wildfires in Arctic-Boreal RussiaElena Cherepanova, Valery Bondur, Viktor Zamshin, and Natalia Feoktistova
Forest fires drive environmental change both directly, by changing the type of land cover and causing local and regional air pollution through emissions of greenhouse gases and aerosols, and indirectly, through secondary effects on atmospheric, soil and hydrological processes. The increase in the number and area of uncontrolled wildfires and the degradation of permafrost in high-latitude areas lead to a change in the balance of greenhouse gases in the atmosphere, which results in a negative impact on the Earth’s climatic system.
This study examined the Arctic-Boreal territories of the Russian Federation, where huge forest fires were observed in 2018-2019. In most of these areas, forest fire detection is carried out only by means of satellite monitoring, without aviation support. The sparsely populated and inaccessible territories are a major factor in the rapid spread of fires over large areas. Most of the forest areas in the region are so-called control zones, where the authorities may decide not to extinguish fires that do not threaten settlements and economic facilities, considering the saving of the forests economically unprofitable. However, there are no reliable data on the environmental consequences of large forest fires in the Arctic-Boreal territories.
Satellite monitoring of wildfires provides the detection of fire locations and an assessment of their area and burning time. In our study, we used various indices calculated from remote sensing data for the pre-fire and post-fire periods to identify the spatiotemporal patterns of environmental change caused by large wildfires. Sentinel-5 TROPOMI time series have been analyzed to detect short-term and long-term atmospheric composition anomalies caused by forest fires in the region. In comparing the methane concentration time series for the 2018-2019 fire seasons, zones of persistently high anomalous values were found. We believe that these anomalies result from constraints of the Sentinel-5 CH4 algorithm, which requires additional work on data validation with respect to local conditions.
The reported study was funded by RFBR, MOST (China) and DST (India) according to the research project № 19-55-80021
How to cite: Cherepanova, E., Bondur, V., Zamshin, V., and Feoktistova, N.: Satellite-derived spatiotemporal patterns of environmental changes caused by 2018-2019 wildfires in Arctic-Boreal Russia, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18080, https://doi.org/10.5194/egusphere-egu2020-18080, 2020.
EGU2020-6795 | Displays | ITS2.15/BG2.25
Investigating chemical composition of the troposphere over the Russian Arctic using the "Optik" Tu-134 aircraft laboratoryBoris D. Belan, Pavel N. Antokhin, Mikhail Yu. Arshinov, Sergey B. Belan, Denis K. Davydov, Georgii A. Ivlev, Artem V. Kozlov, Igor V. Ptashnik, Denis E. Savkin, Denis V. Simonenkov, Gennadii N. Tolmachev, and Alexandr V. Fofonov
The need to undertake a comprehensive investigation of the atmospheric composition over the Russian segment of the Arctic is caused by a serious lack and irregularity of observational data from this region of the Earth. In addition, a comparison of in-situ aircraft measurements with satellite data retrieved for the Kara Sea region in 2017 revealed large uncertainties in determining the vertical distribution of greenhouse gas concentrations using remote sensing methods. The development and improvement of the latter requires at least their periodic verification against precise in-situ aircraft measurements.
The general scheme of the proposed experiment is as follows (map attached): flight from Novosibirsk to Naryan-Mar via Sabetta. From Naryan-Mar, a flight over a water area of the Barents Sea (up to 1000 km). Flight from Naryan-Mar to Sabetta. From there, a flight over a water area of the Kara Sea (up to 1000 km). Then, flight to Tiksi. From Tiksi, a flight over a water area of the Laptev Sea (up to 1000 km). Flight to Chokurdakh or Chersky. From there, a flight over a water area of the East Siberian Sea (up to 1000 km). Flight to Cape Schmidt. A flight over a water area of the Chukchi Sea (up to 1000 km). Return route: Cape Schmidt–Chersky (or Chokurdakh)–Yakutsk–Bratsk–Novosibirsk. The entire aircraft campaign will take about 100 hours of flying time over a period of about 2-3 weeks. It is best to undertake the campaign during summer, when the ocean is open. Flights over land are planned from 0.5 km to 11 km above ground level, and over the sea from 0.2 km to 11 km. The flight profile is variable, from the maximum possible altitude to the minimum allowed one. Vertical profiles of gas and aerosol composition will be obtained, including black carbon and organic components, as well as basic meteorological quantities.
Satellite data, which do not yet provide acceptable accuracy, will be verified. For the first time, unique information will be obtained over the least explored region of the Arctic, which is crucial for the whole planet in terms of climate formation and the impact of global warming.
How to cite: Belan, B. D., Antokhin, P. N., Arshinov, M. Yu., Belan, S. B., Davydov, D. K., Ivlev, G. A., Kozlov, A. V., Ptashnik, I. V., Savkin, D. E., Simonenkov, D. V., Tolmachev, G. N., and Fofonov, A. V.: Investigating chemical composition of the troposphere over the Russian Arctic using the "Optik" Tu-134 aircraft laboratory, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6795, https://doi.org/10.5194/egusphere-egu2020-6795, 2020.
EGU2020-13244 | Displays | ITS2.15/BG2.25 | Highlight
Arctic Datasets as Part of PEEX International CollaborationNuria Altimir, Alexander Mahura, Tuukka Petäjä, Hanna K Lappalainen, Alla Borisova, Iryna Bashmakova, Steffen Noe, Ella-Maria Duplissy, Päivi Haapanala, Jaana Bäck, Fidel Pankratov, Vladimir Schevchenko, Pavel Konstantinov, Mikhail Vaventsov, Sergey Chalov, Alexander Baklanov, Igor Ezau, Sergei Zilitinkevich, and Markku Kulmala and the SMEAR Measurement Concept
Keywords:
Arctic datasets, research infrastructures, in-situ observations, PEEX e-Catalogue, INTAROS, iCUPE
INAR is leading the Pan-Eurasian EXperiment (PEEX; www.atm.helsinki.fi/peex) initiative. The PEEX Research Infrastructure has three components: observations, data and modelling. Observation networks produce large volumes of raw data to be pre-processed, analysed and delivered in the form of datasets (or products) to research and stakeholder/end-user communities. Here, the steps taken are discussed, including an overview (the PEEX e-Catalogue) of the measurement capacity of existing stations and linkages to INTAROS (intaros.nersc.no) and iCUPE (www.atm.helsinki.fi/icupe).
In-Situ Atmospheric-Ecosystem Collaborating Stations
Although more than 200 stations are present in the PEEX regions of interest, so far only about 60 Russian stations have metadata available. The station metadata make it possible to categorize stations in a systematic manner, to connect them to international observation networks such as WMO-GAWP and CERN, and to standardize data formats. As part of the INAR activities with Russian partners, an e-catalogue was published as a living document (to be updated as new stations join the PEEX network). This catalogue (www.atm.helsinki.fi/peex/index.php/peex-russia-in-situ-stations-e-catalogue) provides information on the measurements and contacts of the Russian stations in the collaboration network, promotes research collaboration and the stations as partners of the network, and gives wider visibility to the stations' activities.
Integrated Arctic Observation System (INTAROS)
For the Arctic region, 11 stations were selected for the Atmospheric, Terrestrial and Cryospheric themes. Updated metadata were obtained for these measurement stations, located within the Russian Arctic territories. The metadata include basic information, a physico-geographical and infrastructure description of the sites, and details on atmosphere and ecosystem (soils–forest–lakes–urban–peatland–tundra) measurements. Measurements at these sites represent the local conditions of the immediate surrounding environment, and the datasets (as time series) are available upon request. For the SMEAR-I (Station for Measuring Atmosphere-Ecosystem Relations) station included in the INTAROS web-based catalogue (catalog-intaros.nersc.no/dataset), the measurement programme includes meteorological (wind speed and direction, air temperature and relative humidity), radiation (global, reflected, net), chemistry/aerosol (CO2, SO2, O3, NOx, etc.), and ecosystem, photosynthesis and irradiance related measurements.
Integrative and Comprehensive Understanding on Polar Environments (iCUPE)
More than 20 open-access datasets are produced as products for researchers, decision- and policy-makers, stakeholders and end-user communities. A list of expected datasets is presented at www.atm.helsinki.fi/icupe/index.php/datasets/list-of-datasets-as-deliverables. These datasets are promoted to the wider science and public communities through so-called “teasers” (www.atm.helsinki.fi/icupe/index.php/submitted-datasets). For the Russian Arctic regions, these also include datasets from the iCUPE Russian collaborators: atmospheric mercury measurements at the Amderma station; elemental and organic carbon over the north-western coast of the Kandalaksha Bay of the White Sea; micro-climatic features and Urban Heat Island intensity in Arctic cities; and others. Delivered datasets (www.atm.helsinki.fi/icupe/index.php/datasets/delivered-datasets) are directly linked (and downloadable) on the website, and corresponding Read-Me files with detailed descriptions and metadata are available. Selected datasets are also to be tested for pre/post-processing and analysis on several cloud-based online platforms.
How to cite: Altimir, N., Mahura, A., Petäjä, T., Lappalainen, H. K., Borisova, A., Bashmakova, I., Noe, S., Duplissy, E.-M., Haapanala, P., Bäck, J., Pankratov, F., Schevchenko, V., Konstantinov, P., Vaventsov, M., Chalov, S., Baklanov, A., Ezau, I., Zilitinkevich, S., and Kulmala, M. and the SMEAR Measurement Concept: Arctic Datasets as Part of PEEX International Collaboration, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13244, https://doi.org/10.5194/egusphere-egu2020-13244, 2020.
EGU2020-12864 | Displays | ITS2.15/BG2.25
Verification of temperature and humidity conditions of mineral soils in the active layer modelVasiliy Bogomolov, Dyukarev Egor, and Stepanenko Victor
Detailed monitoring of soil-layer temperature provides unique experimental material for studying the complex processes of heat transfer from the surface layer of the atmosphere to soils. According to data from autonomous air-temperature loggers, there are no significant differences between observation sites within each key area. Annual (2011-2018) observations of the temperature regime of the soil and ground show that the microclimatic specificity of bog ecosystems is clearly manifested in the characteristics of the daily and annual variations in soil temperature. A regression model describing the change in the maximum freezing depth during the winter has been proposed, using air temperature, snow depth and bog water level (BWL) as predictors. The effects of BWL and snow cover have similar magnitudes, which indicates an approximately equal contribution of BWL variations and snow depth to changes in freezing. The thickness of the seasonally frozen layer at all sites is 20-60 cm, and the maximum freezing of the peat layer is reached in February-March. Degradation of the seasonally frozen layer occurs both from above and from below.
It was found that similar bog ecosystems in different bog massifs have significantly different temperature regimes. The peat stratum of northern bogs can be both warmer (in winter) and colder (in summer) than that of bogs located 520 km to the south and 860 km to the west.
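A regression of the kind described above can be sketched as an ordinary least-squares fit; the numbers below are synthetic and purely illustrative, not the study's actual data or coefficients, and the simple linear form is an assumption.

```python
import numpy as np

# Hypothetical winter records: mean air temperature (deg C), snow depth (cm),
# bog water level (cm) -> maximum freezing depth (cm). All values invented.
T_air  = np.array([-18.0, -15.5, -20.2, -12.3, -17.1])
snow   = np.array([ 45.0,  60.0,  30.0,  70.0,  50.0])
bwl    = np.array([ 10.0,  25.0,   5.0,  30.0,  15.0])
freeze = np.array([ 48.0,  32.0,  60.0,  22.0,  42.0])

# Ordinary least squares: freeze ~ b0 + b1*T_air + b2*snow + b3*bwl
X = np.column_stack([np.ones_like(T_air), T_air, snow, bwl])
coef, *_ = np.linalg.lstsq(X, freeze, rcond=None)
predicted = X @ coef
```

With such a fit, the relative contributions of snow depth and BWL can be compared via their standardized coefficients, which is how an "approximately equal contribution" claim would be quantified.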
How to cite: Bogomolov, V., Egor, D., and Victor, S.: Verification of temperature and humidity conditions of mineral soils in the active layer model, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12864, https://doi.org/10.5194/egusphere-egu2020-12864, 2020.
EGU2020-5999 | Displays | ITS2.15/BG2.25
Chemical composition of underground ice of the Marre-Sale Cape (West Siberia)Vladislav Butakov and Elena Slagoda
EGU2020-10449 | Displays | ITS2.15/BG2.25
The link between precipitation and recent outbreak of anthrax in North-West SiberiaEkaterina Ezhova, Dmitry Orlov, Elli Suhonen, Dmitry Kaverin, Alexander Mahura, Victor Gennadinik, Ilmo Kukkonen, Dmitry Drozdov, Hanna Lappalainen, Vladimir Melnikov, Tuukka Petäjä, Veli-Matti Kerminen, Sergey Zilitinkevich, Svetlana Malkhazova, Torben Christensen, and Markku Kulmala
Anthrax is a bacterial disease affecting mainly livestock but also posing a risk to humans. During the anthrax outbreak on the Yamal Peninsula in 2016, 36 people were infected and more than 2.5 thousand reindeer died or were culled to prevent further contamination [1]. Anthrax is a natural focal disease, meaning that its agents depend on climatic conditions. The revival of the bacteria in a previously epidemiologically stable region was attributed to thawing permafrost, intensified during the heat wave of 2016. We studied the recent dynamics of air temperature as well as summer and winter precipitation in the region. In addition, we analysed the effect of winter precipitation and air temperature on the dynamics of active layer thickness using data from Circumpolar Active Layer Monitoring sites [2]. Our analysis suggests that permafrost had been thawing intensively for several years before the outbreak, when snowy cold winters followed warmer winters. Thick snow prevented the soil from freezing and enhanced permafrost thawing. In addition, we showed that summer precipitation decreased drastically in the outbreak region during recent years, likely contributing to the spread of the disease.
[1] Popova, A.Yu. et al. Outbreak of Anthrax in the Yamalo-Nenets Autonomous District in 2016, Epidemiological Peculiarities. Problemy Osobo Opasnykh Infektsii [Problems of Particularly Dangerous Infections]. 4, 42–46 (2016).
[2] Circumpolar Active Layer Monitoring site: https://www2.gwu.edu/~calm/ [2/08/2019].
How to cite: Ezhova, E., Orlov, D., Suhonen, E., Kaverin, D., Mahura, A., Gennadinik, V., Kukkonen, I., Drozdov, D., Lappalainen, H., Melnikov, V., Petäjä, T., Kerminen, V.-M., Zilitinkevich, S., Malkhazova, S., Christensen, T., and Kulmala, M.: The link between precipitation and recent outbreak of anthrax in North-West Siberia, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10449, https://doi.org/10.5194/egusphere-egu2020-10449, 2020.
EGU2020-11452 | Displays | ITS2.15/BG2.25
Soil CO2 fluxes and surface microtopography in a mixed hemiboreal forest: space, time and models.Dmitrii Krasnov, Alisa Krasnova, and Steffen Noe
Understanding the biophysical mechanisms influencing the spatial and temporal distribution of the CO2 flux is important for predicting the response of forest ecosystems to environmental changes. It has been shown that the most important factors controlling CO2 flux fluctuations from forest soils are soil moisture, temperature and the type of forest stand. In this work, we present three years of soil CO2 flux measurements in a hemiboreal forest characterized by high spatial heterogeneity of vegetation and soil. Three sample plots representing the main common tree species (Pinus sylvestris, Picea abies and Betula sp.) were chosen to assess the influence of tree species composition on the soil CO2 flux. The chosen sample plots have a clear microtopographical structure with depressions, elevations and flat zones. The data were collected from the three sample plots according to forest floor microtopography using a manual closed dynamic chamber equipped with an IRGA sensor (Vaisala GMP343 probe) and humidity and temperature sensors (Vaisala HMP155). The obvious temporal-resolution limitation of the manual chamber method is compensated by higher spatial coverage.
Previous research has indicated that one of the major sources of uncertainty in flux estimation is the choice of the model used for flux calculation. We compared the commonly used models (linear, exponential and HMR) using two available R packages, “gasflux” and “flux”. Additionally, we developed an algorithm that automatically chooses the best model based on widely used criteria (MAE, RAE, AIC, RMSE).
The results showed that in most cases the linear and exponential models performed better. The comparison of sample plots showed that microtopography had the largest influence in the birch stand, whereas moisture had a stronger effect in the pine stand.
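The model-selection step can be sketched as follows (in Python rather than the R packages named in the abstract). A straight line and a curved fit are compared on a synthetic chamber concentration series by AIC; the quadratic here is only a stand-in for the curved HMR-type model, and the data are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 300.0, 31)   # seconds since chamber closure
# Synthetic chamber CO2 record (ppm): rising but flattening, plus noise
c = 400.0 + 0.5 * t - 0.0008 * t ** 2 + rng.normal(0.0, 1.0, t.size)

def aic(resid, k):
    """AIC for a least-squares fit with k parameters."""
    n = resid.size
    return n * np.log(np.sum(resid ** 2) / n) + 2 * k

# Linear model: c = a + b*t
lin = np.polyfit(t, c, 1)
aic_lin = aic(c - np.polyval(lin, t), 2)

# Curved model (quadratic stand-in for the saturating HMR-type fit)
quad = np.polyfit(t, c, 2)
aic_quad = aic(c - np.polyval(quad, t), 3)

best = "curved" if aic_quad < aic_lin else "linear"
init_slope = quad[1] if best == "curved" else lin[0]   # dC/dt at t = 0, ppm/s
print(best, round(init_slope, 3))
```

The flux is then derived from the initial slope of the winning model, which is why the choice of model is a major source of uncertainty.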
How to cite: Krasnov, D., Krasnova, A., and Noe, S.: Soil CO2 fluxes and surface microtopography in a mixed hemiboreal forest: space, time and models., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11452, https://doi.org/10.5194/egusphere-egu2020-11452, 2020.
EGU2020-21045 | Displays | ITS2.15/BG2.25
Heterogenous snow cover derived uncertainty in Arctic carbon budgetHotaek Park and Youngwook Kim
The winter of northern Arctic regions is characterized by strong winds that lead to frequent blowing snow and thus heterogeneous snow cover, which critically affects permafrost hydrothermal processes and the associated feedbacks across the northern regions. Until now, however, observations and models have not documented the impacts of blowing snow. A blowing snow process was coupled into the land surface model CHANGE, and the improved model was applied to observational sites in northeastern Siberia for 1979–2016. The simulated snow depth and soil temperature showed general agreement with the observations. To quantify the impacts of blowing snow on permafrost temperatures and the associated greenhouse gases, two decadal experiments that included or excluded blowing snow were conducted for the observational sites and over the pan-Arctic scale. The differences between the two experiments represent the impacts of blowing snow on the analysed components. The thinner snow depth induced by blowing snow resulted in cooler permafrost temperatures and a lower active layer thickness; the lower temperature limited vegetation photosynthetic activity through increased soil moisture stress from a larger soil ice fraction, and hence lowered ecosystem productivity. The cooler permafrost temperature is also linked to less decomposition of soil organic matter and lower releases of CO2 and CH4 to the atmosphere. These results suggest that most land models, which lack a blowing snow component, likely overestimate the release of greenhouse gases from tundra regions. There is a strong need to improve land surface models for better simulations and future projections of northern environmental changes.
How to cite: Park, H. and Kim, Y.: Heterogenous snow cover derived uncertainty in Arctic carbon budget, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21045, https://doi.org/10.5194/egusphere-egu2020-21045, 2020.
EGU2020-2437 | Displays | ITS2.15/BG2.25
A Decomposition of Feedback Contributions to the Arctic Temperature Biases in the CMIP5 Climate ModelsTae-Won Park and Doo-Sun Park
The systematic temperature biases over the Arctic Sea in the CMIP5 models are decomposed into partial biases due to physical and dynamical processes, based upon the climate feedback-response analysis method (CFRAM). In the CFRAM framework, physical processes are further divided into water vapor, cloud, and albedo feedbacks. Though the Arctic temperature biases depend largely on the model, considerable cold biases are found in most models and in the ensemble mean. Overall, temperature biases corresponding to physical and dynamical processes tend to cancel each other out, and the total biases, equal to their sums, are geographically similar to those related to physical processes. Among the physics-related biases, the contribution of the albedo feedback is the largest, followed by the cloud and water vapor feedbacks in turn. Quantitative contributions of the processes to the temperature biases are evaluated from area-mean values over the entire Arctic Sea, the Barents-Kara Sea, and the Beaufort Sea. While the relationships between total and partial biases over the Arctic Sea show large model dependency, at the local scale the total temperature biases over the Barents-Kara Sea and the Beaufort Sea arise from contributions that are consistent among models. An overestimate (underestimate) of specific humidity and cloud fraction in the models is responsible for overall warm (cold) biases through longwave heating rates of the greenhouse effect. Shortwave cloud forcing from cloud fraction biases offsets a substantial part of the biases related to longwave cloud forcing, while the shortwave effect of the specific humidity bias plays a minor role in the water vapor feedback. The fact that the geographical distribution of sea-ice biases is mostly opposite to that of the partial temperature bias due to the albedo feedback indicates that the biased simulation of sea ice in the models is the main contributor to the albedo feedback.
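The CFRAM bookkeeping above, in which the total bias is the sum of process-specific partial biases and processes are ranked by their area-mean contributions, can be illustrated on a toy grid. All fields and magnitudes below are invented; only the additive structure and the reported ranking (albedo > cloud > water vapor) are taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(7)
shape = (10, 20)  # toy lat x lon grid

# Invented partial temperature biases (K) per process
partial = {
    "albedo": rng.normal(-1.5, 0.3, shape),
    "cloud": rng.normal(-0.8, 0.3, shape),
    "water_vapour": rng.normal(0.4, 0.3, shape),
    "dynamics": rng.normal(1.2, 0.3, shape),
}

# In a CFRAM-style decomposition, the total bias is the sum of partials
total = sum(partial.values())

# Area-mean contribution of each physical process, ranked by magnitude
phys = {k: float(v.mean()) for k, v in partial.items() if k != "dynamics"}
ranking = sorted(phys, key=lambda k: abs(phys[k]), reverse=True)
print(round(float(total.mean()), 2), ranking)
```

The near-cancellation of the large dynamics term against the physics terms in the toy mean mirrors how physical and dynamical biases offset each other in the abstract.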
How to cite: Park, T.-W. and Park, D.-S.: A Decomposition of Feedback Contributions to the Arctic Temperature Biases in the CMIP5 Climate Models, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2437, https://doi.org/10.5194/egusphere-egu2020-2437, 2020.
EGU2020-19377 | Displays | ITS2.15/BG2.25
Relationships linking satellite-retrieved ocean color data with atmospheric components in the ArcticMarjan Marbouti, Sehyun Jang, Silvia Becagli, Tuomo Nieminen, Gabriel Navarro, Veli-Matti Kerminen, Mikko Sipilä, and Markku Kulmala
We examined the relationships linking in-situ measurements of gas-phase methanesulfonic acid (MSA), sulfuric acid (SA), iodic acid (HIO3), highly oxidized organic molecules (HOM) and aerosol size distributions with satellite-derived chlorophyll (Chl-a) and oceanic primary production (PP). Atmospheric data were collected at the Ny-Ålesund site during spring-summer 2017 (30 March-4 August). We compared ocean color data from the Barents Sea and the Greenland Sea with concentrations of low-volatility vapours and new particle formation. The aim is to understand the main factors controlling the concentrations of atmospheric components in the Arctic in different ocean domains and seasons. The early phytoplankton bloom starting in April at the marginal ice zone caused Chl-a and PP in the Barents Sea to be higher than in the Greenland Sea during spring, whereas the pattern was opposite in summer. We found the correlation between ocean color data (Chl-a and PP) and MSA to decrease from spring to summer in the Barents Sea and to increase in the Greenland Sea. This establishes a relationship between sea-ice melt and the phytoplankton bloom, which is triggered by sea-ice melt. A similar pattern was observed for SA. HIO3 in both ocean domains also correlated with Chl-a and PP during springtime, with the Greenland Sea more active than the Barents Sea. These results suggest that marine phytoplankton metabolism is an important source of MSA and SA, as expected, but also a source of HIO3 precursors (such as I2). HOMs correlated weakly with the ocean color parameters in comparison to the other atmospheric vapours in this study, both in spring and summer. A plausible explanation for the low correlation is that the primary source of volatile organic compounds (VOC), the precursors of HOM, is the soil of the Svalbard archipelago rather than the ocean. During spring, nucleation-mode particles were found to correlate with Chl-a in the Barents Sea and with PP in the Greenland Sea.
This means that biogenic productivity has a strong impact on new particle formation in spring, although small particles are not related to the biogenic parameters in summer.
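The correlation analysis above reduces to computing Pearson correlations between an ocean-color series and a vapour-concentration series. A minimal sketch with synthetic data follows; the coupling strength and units are invented, not the measured values.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic springtime series: satellite Chl-a (mg m-3) and a gas-phase
# MSA concentration hypothetically driven by it (arbitrary units)
chla = rng.gamma(2.0, 1.0, 60)
msa = 0.8 * chla + rng.normal(0.0, 0.5, 60)

# Pearson correlation coefficient between the two series
r = np.corrcoef(chla, msa)[0, 1]
print(round(r, 2))
```

In the study, the seasonal change of such an r value (decreasing into summer in the Barents Sea, increasing in the Greenland Sea) is what links the vapours to the bloom timing.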
How to cite: Marbouti, M., Jang, S., Becagli, S., Nieminen, T., Navarro, G., Kerminen, V.-M., Sipilä, M., and Kulmala, M.: Relationships linking satellite-retrieved ocean color data with atmospheric components in the Arctic, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19377, https://doi.org/10.5194/egusphere-egu2020-19377, 2020.
EGU2020-12485 | Displays | ITS2.15/BG2.25
The outgassing of carbon dioxide from aquatic ecosystems of Western Siberia (ZOTTO area) and implications for the regional carbon budgetProkushkin Anatoly, Panov Alexey, Polosukhina Daria, Urban Anastasia, and Karlsson Jan
The lateral migration of dissolved carbon dioxide (CO2) with soil solutions to freshwater aquatic systems and the in situ mineralization of soil-derived organic carbon (OC) often cause supersaturation of inland waters with CO2. The evasion of excess CO2 from lake and stream surfaces to the atmosphere is an important, but underestimated, pathway of carbon flux in the coupled terrestrial-aquatic carbon cycle. As a result, the loss of terrestrial OC as CO2 through drainage networks remains poorly accounted for in regional carbon budgets estimated on the basis of eddy covariance measurements. In this study we attempted to quantify dissolved CO2 (pCO2) and CO2 emissions (fCO2) in fluvial and lacustrine waterbodies located within the peat-bog dominated landscape of Western Siberia (ZOTTO area, 60°N, 89°E). For two consecutive years (2018-2019) we studied the seasonal and diurnal dynamics of pCO2 and fCO2 in several streams of different order (1-4) and in ponds within a peat bog. Dissolved pCO2 was measured with a portable IRGA (Vaisala GMP222) placed in a PTFE membrane. CO2 emissions were measured using a floating chamber equipped with the same portable IRGA (Vaisala GMP222). Although pCO2 values were highest in the winter season (350-820 µmol/l), we did not detect sizeable emissions from the water surface in that period. The peaks of pCO2 in the summer-fall season (up to 360 µmol/l) occurred during stormflow. The frost-free season emission of CO2 from stream surfaces ranged from 0.2 to 7.5 µmol/m2/s and decreased with stream order. The season-averaged CO2 evasion from the Razvilki stream (a 2nd order stream) was 4.9±1.3 µmol/m2/s, comparable to the seasonal mean of soil CO2 emissions in the study area. However, in contrast to soil respiration, whose maxima often correspond to the highest soil temperatures, the peaks of CO2 outgassing occur at high flow regimes.
The fCO2 values correlated with discharge (r = 0.60, p<0.05) and DOC concentrations (r = 0.69, p<0.05). Aquatic C losses are still being analysed with respect to the estimation of surface water area.
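A floating-chamber flux is conventionally derived from the rate of concentration rise in the chamber headspace via the ideal-gas conversion F = (dC/dt)·P·V/(R·T·A). The sketch below uses an assumed chamber geometry and a synthetic concentration series, not the authors' setup or processing.

```python
import numpy as np

R = 8.314        # gas constant, J mol-1 K-1
P = 101325.0     # air pressure, Pa
T = 288.15       # air temperature, K
V = 0.015        # chamber headspace volume, m3 (assumed)
A = 0.07         # chamber footprint area, m2 (assumed)

# Synthetic chamber CO2 record: 0.2 ppm/s rise over a 5-min deployment
t = np.arange(0.0, 301.0, 30.0)            # s
ppm = 400.0 + 0.2 * t                      # measured CO2, ppm

slope_ppm_s = np.polyfit(t, ppm, 1)[0]     # dC/dt, ppm s-1

# Convert ppm/s to mol mol-1 s-1, then to a surface flux:
# F = (dC/dt) * P * V / (R * T * A), in mol m-2 s-1
flux_mol = slope_ppm_s * 1e-6 * P * V / (R * T * A)
flux_umol = flux_mol * 1e6                 # µmol m-2 s-1
print(round(flux_umol, 2))
```

With these assumed values the flux lands near the lower end of the 0.2-7.5 µmol/m2/s range reported for the study's streams.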
How to cite: Anatoly, P., Alexey, P., Daria, P., Anastasia, U., and Jan, K.: The outgassing of carbon dioxide from aquatic ecosystems of Western Siberia (ZOTTO area) and implications for the regional carbon budget, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12485, https://doi.org/10.5194/egusphere-egu2020-12485, 2020.
EGU2020-10114 | Displays | ITS2.15/BG2.25
Interseasonal impact of Siberian snow cover formation rate on the baroclinicity and wave activity over Northern EurasiaYuliya V. Martynova and Vladimir N. Krupchatnikov
Climate change can alter the pattern of atmospheric interaction between polar and middle latitudes, which can change the cyclone formation regime and, in turn, intensify extreme and hazardous phenomena. It is therefore essential to clearly understand the nature of the atmospheric interaction in question.
Due to climatic features, the most extensive snow cover forms precisely in the Siberian part of Eurasia, causing significant radiative cooling over this territory. The snow cover area, and consequently the intensity of the radiative cooling, can vary significantly from year to year. This can significantly affect the interaction between the troposphere and lower stratosphere of the middle and Arctic latitudes, not only while the snow cover is being established but also in the following winter season. Knowledge of how local surface disturbances, arising in autumn from the formation of snow cover, influence the atmospheric conditions of the following winter can serve as additional information for seasonal weather forecasts.
The present study aims to assess the response of the troposphere and lower stratosphere over Northern Eurasia during the autumn-winter period to a rate of the snow cover formation in Siberia.
We separated the years with sharp, rapid snow cover formation from those with smooth, slow formation. For these years we analyzed the baroclinicity index and its components (the zonal and meridional potential temperature gradients and the Brunt-Väisälä frequency) at various isobaric levels (up to 200 hPa), as well as the Eliassen-Palm flux. The results suggest that anomalies in the snow cover formation rate in Siberia can contribute to and cause anomalies in atmospheric circulation in the autumn-winter period. However, the mechanism by which this influence propagates is not yet fully clear.
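The two ingredients of a baroclinicity index are the static stability, via the Brunt-Väisälä frequency N, and the vertical wind shear; a common combination is the Eady growth rate σ = 0.31 (f/N)|dU/dz|. Whether the study uses exactly this form is an assumption, and the profile values below are illustrative, not the study's data.

```python
import numpy as np

g = 9.81                                       # m s-2
f = 2 * 7.292e-5 * np.sin(np.deg2rad(60.0))    # Coriolis parameter at 60N, s-1

# Illustrative lower-tropospheric profile
z = np.array([0.0, 1000.0, 2000.0, 3000.0])    # height, m
theta = np.array([280.0, 283.0, 286.5, 290.0]) # potential temperature, K
u = np.array([5.0, 10.0, 15.0, 20.0])          # zonal wind, m s-1

# Brunt-Vaisala frequency: N = sqrt((g / theta) * dtheta/dz)
N = np.sqrt(g / theta * np.gradient(theta, z))

# Eady-type baroclinicity index: sigma = 0.31 * (f / N) * |dU/dz|
sigma = 0.31 * f / N * np.abs(np.gradient(u, z))
print(N.round(4), (sigma * 86400).round(2))    # sigma expressed per day
```

Larger shear or weaker stability (smaller N) raises σ, which is how anomalies in surface forcing can project onto storm-track activity.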
This work was supported by the Russian Science Foundation grant No. 19-17-00248.
How to cite: Martynova, Y. V. and Krupchatnikov, V. N.: Interseasonal impact of Siberian snow cover formation rate on the baroclinicity and wave activity over Northern Eurasia, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10114, https://doi.org/10.5194/egusphere-egu2020-10114, 2020.
EGU2020-3851 | Displays | ITS2.15/BG2.25
The energy budget of West Siberian wetland in summer
Irina Repina, Victor Stepanenko, and Arseniy Artamonov
This observational study reports variations in surface fluxes (turbulent, radiative, and soil heat) and ancillary atmospheric, surface, and soil data based on in-situ measurements at the Mukhrino field station, located in the middle taiga zone of the West Siberian Lowland. We estimated the carbon dioxide flux and energy budget of a typical wetland of western Siberia based on measurements in July 2019. Turbulent fluxes of momentum, sensible and latent heat, and CO2 were measured with the eddy covariance technique. The footprint of the measured fluxes comprised a homogeneous mixture of tree-covered and smooth moss-covered surfaces. The atmospheric measurements were supplemented by measurements of heat flux through the soil, net radiation components, and soil temperature at several depths. The turbulent heat fluxes (sensible and latent) show a diurnal variation typical of land ecosystems, in phase with net radiation. Most of the available energy is released as latent heat flux, while the maximum sensible heat flux is more than three times lower. The net CO2 sink was high, but typical for a wetland area. The influence of the moss cover on the temperature regime of the soil is considered. Based on soil temperature and heat flux measurements, the thermal conductivity of the moss layer was estimated. The thermal and dynamic roughness lengths of the moss-covered surface in summer were also studied. The dependence of the dynamic roughness length on atmospheric stability was established, and the coefficients relating the ratio of thermal to dynamic roughness length to the roughness Reynolds number were determined. The parameterizations obtained in this work can be used in Earth system models to represent wetland surfaces. The work was supported by RFBR grant 18-05-60126.
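The roughness-length relation mentioned above is commonly expressed through a kB⁻¹ = ln(z0m/z0h) function of the roughness Reynolds number. The sketch below is illustrative Python only; the power-law form and the coefficients `a` and `b` are placeholders, not the values actually fitted in the study.

```python
import math

def roughness_reynolds(u_star, z0m, nu=1.5e-5):
    """Roughness Reynolds number Re* = u* z0m / nu
    (u*: friction velocity in m/s, z0m: dynamic roughness length in m,
    nu: kinematic viscosity of air in m^2/s)."""
    return u_star * z0m / nu

def thermal_roughness(z0m, u_star, a=0.1, b=0.5, nu=1.5e-5):
    """Thermal roughness length z0h from a kB^-1 relation:
    ln(z0m / z0h) = a * Re*^b.
    The coefficients a and b here are hypothetical placeholders;
    the study determines site-specific values from measurements."""
    re_star = roughness_reynolds(u_star, z0m, nu)
    kb_inv = a * re_star ** b
    return z0m * math.exp(-kb_inv)

# Example: a moss surface with z0m = 1 cm and u* = 0.3 m/s
z0h = thermal_roughness(0.01, 0.3)
```

With any positive kB⁻¹, the thermal roughness length comes out smaller than the dynamic one, which is the usual situation over vegetated surfaces.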
How to cite: Repina, I., Stepanenko, V., and Artamonov, A.: The energy budget of West Siberian wetland in summer, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3851, https://doi.org/10.5194/egusphere-egu2020-3851, 2020.
EGU2020-6249 | Displays | ITS2.15/BG2.25
Carrying capacity of winter geese in the largest freshwater floodplain in different hydrological scenarios
Shaoxia Xia, Zhujian Meng, Jiakun Teng, and Xiubo Yu
The maintenance of wetland functions is closely related to the hydrological regime in floodplains. Poyang Lake, a large freshwater floodplain, is one of the most important wintering grounds for geese along the East Asian-Australasian Flyway. Wintering geese rely on Carex spp. as their main food source, and its growth is greatly affected by flooding duration and the exposure time of the meadow. Hydrological conditions therefore affect the carrying capacity for geese through the growth of the wet meadow.
Combining remote sensing data, digital elevation models, and field surveys, we identified the exposure time of the meadow and the effective growth period of Carex spp. Applying the feeding characteristics of geese to a logistic growth equation, we derived from the growth curve the time window suitable for geese to feed on the vegetation. The distribution pattern of Carex spp. suitable for goose feeding was also mapped according to flood recession dates and the digital elevation model. In addition, we modelled above-ground biomass using a vegetation index and in-situ experimental data for the wintering periods of a wet year (2016), a normal year (2015) and a dry year (2006). From these, we estimated the wintering-period carrying capacity in the three hydrological scenarios based on the daily energy demand of geese.
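The logistic-growth step can be illustrated as follows. This is a hypothetical sketch: the parameter values (`K`, `r`, `t0`) and the biomass thresholds are made up for illustration, not the study's fitted values. The feeding window is obtained by inverting the logistic curve at the lower and upper biomass thresholds.

```python
import math

def logistic_biomass(t, K=800.0, r=0.08, t0=40.0):
    """Above-ground biomass (g/m^2) on day t after meadow exposure,
    following a logistic growth curve with carrying capacity K,
    growth rate r and inflection day t0 (illustrative values)."""
    return K / (1.0 + math.exp(-r * (t - t0)))

def feeding_window(b_min, b_max, K=800.0, r=0.08, t0=40.0):
    """Days when biomass lies within the grazeable range
    (too little biomass is not worth foraging; too tall, fibrous
    vegetation is unpalatable). Inverts the logistic curve at the
    lower and upper biomass thresholds; requires 0 < b_min < b_max < K."""
    def t_of_b(b):
        return t0 - math.log(K / b - 1.0) / r
    return t_of_b(b_min), t_of_b(b_max)

start, end = feeding_window(100.0, 600.0)
```

Shifting the exposure date (as a dry year does) shifts `t0`, and hence the whole feeding window, relative to the arrival of the wintering geese.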
The results show that the exposure time in the dry year was brought forward by 41 and 56 days compared to the normal year and the wet year, respectively. The average biomass was highest in the wet year, at about 5.7×10⁴ t; the biomass in the dry year and the normal year was 12% and 4.4% lower, respectively. In all three hydrological scenarios, the carrying capacity for geese in Poyang Lake exceeded the actual number of geese recorded in the waterbird surveys of the same years. The maximum carrying capacity occurred in September in the dry year, but in November in the normal and wet years. In general, the growth of Carex spp. in the normal and wet years matched the requirements of wintering geese during their peak period better than in the dry year. Indeed, the growth process in the dry year may even have negative effects on goose feeding. This study is important for appropriate hydrological regulation and wetland management in Poyang Lake, and for predicting habitat carrying capacity and formulating conservation strategies based on scientific data.
How to cite: Xia, S., Meng, Z., Teng, J., and Yu, X.: Carrying capacity of winter geese in the largest freshwater floodplain in different hydrological scenarios, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6249, https://doi.org/10.5194/egusphere-egu2020-6249, 2020.
EGU2020-5907 | Displays | ITS2.15/BG2.25
Participation of Ukrainian Educational and Research Institutions in Implementation of the Tasks of the Pan-Eurasian Experiment (PEEX)
Sergiy Stepanenko and Anatoliy Polovyi
In 2019, three Ukrainian institutions (Taras Shevchenko National University of Kyiv, the Ukrainian Hydrometeorological Institute and Odessa State Environmental University) joined the Pan-Eurasian Experiment (PEEX). This can be considered a practical result of the Erasmus+ international educational project 'Adaptive Learning Environment for Competence in Economic and Societal Impacts of Local Weather, Air Quality and Climate' (ECOIMPACT), which is in line with the PEEX ideology (research – research infrastructure – education) and covers part of the programme's geographic region.
Inclusion of the Ukrainian institutions in the programme will make it possible to develop research on climate change and its impact on air quality, the dynamics of carbon cycles in ecosystems, biodiversity loss, greenhouse gas emissions and forest fires, public health, the chemization of industry and agriculture, food provision, energy production, and access to fresh water in Ukraine – all tasks designated as priorities in PEEX.
Within the framework of the PEEX Infrastructure subprogramme, the Ukrainian partners plan to create a long-term research infrastructure consisting of an extensive network of research stations; so far, these are only standard meteorological stations. Unfortunately, Ukraine has no FluxNet micrometeorological stations, let alone flagship research stations, that could measure a complete set of characteristics of ecosystem-atmosphere interaction.
According to the joint research plan of the Ukrainian project partners for 2020, the capacities of the existing network of hydrometeorological stations will be reviewed, together with the feasibility of expanding it using the 'Inspector-Meteo' automatic weather stations (AWS-IM) and Vaisala AQT-420 air quality transmitters available at three Ukrainian universities as a result of the Erasmus+ ECOIMPACT project, and of acquiring data from the network of the Ukrainian company IT-LYNX, which operates 55 AWS-IM stations for agribusiness purposes. The AWS-IM will expand the range of standard meteorological observations, and coupling them with models of environmental processes will make it possible to simulate the state of natural and man-made ecosystems at various spatial and temporal scales.
It is also proposed to include the AQT-420 transmitters acquired by the three Ukrainian universities under the Erasmus+ ECOIMPACT project in a programme for monitoring air quality in large Ukrainian cities, with a view to possible subsequent cooperation with the MegaSense project.
A detailed research plan of the Ukrainian participants for PEEX programme collaboration for the year 2020 is to be presented at the PEEX Inter- and Transdisciplinary Session at the EGU General Assembly.
Participation of the Ukrainian partner universities in the PEEX educational subprogramme Transfer of Knowledge is also important for training a new generation of researchers in Ukraine who will use the opportunities and tools gained through the implementation of the PEEX programme, including those aimed at adaptation to and mitigation of climate change effects, and for disseminating the new knowledge and technologies acquired under the project to decision makers and the wider public.
How to cite: Stepanenko, S. and Polovyi, A.: Participation of Ukrainian Educational and Research Institutions in Implementation of the Tasks of the Pan-Eurasian Experiment (PEEX), EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5907, https://doi.org/10.5194/egusphere-egu2020-5907, 2020.
EGU2020-22604 | Displays | ITS2.15/BG2.25
Transferring best practices of doctoral training between EU, Russia, Belarus and Armenia
Katja Anniina Lauri, Sini Karppinen, Alexander Mahura, Timo Vesala, Tuukka Petaja, Inga Skendere, and Irina Obukhova
MODEST (Modernization of Doctoral Education in Science and Improvement Teaching Methodologies) is a new capacity-building project funded by the Erasmus+ programme and coordinated by the University of Latvia. There are three other EU partners (from Finland, Poland and the United Kingdom) and a total of ten partners from three partner countries (Russia, Belarus and Armenia). Aims of the project include:
The work is carried out in three phases: preparation, development, and dissemination and exploitation. In the preparation phase, a detailed analysis of the organization of doctoral studies and research management structures is carried out in both the EU and the partner countries. The development phase includes the preparation of training materials, a series of study visits and training sessions, and the creation of DTCs. The dissemination and exploitation phase includes open-access learning material, dissemination conferences, publications and workshop/conference presentations, as well as events and open resources for stakeholders, policymakers, students and the general public.

To partly serve similar purposes as MODEST, the University of Helsinki and the Russian State Hydrometeorological University have introduced a new project, PEEX-AC (PEEX Academic Challenge). The aims of PEEX-AC are to share knowledge and experience; to promote state-of-the-art research and educational tools through the organization of an intensive research training course on "Multi-Scales and -Processes Modelling and Assessment for Environmental Applications"; to improve the added value of research-oriented education in Finnish and Russian universities; and to boost PEEX international collaboration.

The MODEST and PEEX-AC projects serve as great examples of the transfer of good practices in higher education, especially at the doctoral level, but they also create new connections for educational and scientific collaboration. From the PEEX perspective, MODEST is an important initiative strengthening connections between European universities and institutions in Russia, Belarus and Armenia. The project will continue until autumn 2022.
How to cite: Lauri, K. A., Karppinen, S., Mahura, A., Vesala, T., Petaja, T., Skendere, I., and Obukhova, I.: Transferring best practices of doctoral training between EU, Russia, Belarus and Armenia, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22604, https://doi.org/10.5194/egusphere-egu2020-22604, 2020.
EGU2020-15881 | Displays | ITS2.15/BG2.25
Communication channels to build a stronger PEEX network
Stephany Buenrostro Mazon, Alla Borisova, Nuria Altimir, Alexander Mahura, and Hanna K. Lappalainen
In a transdisciplinary and pan-region programme like PEEX, building a strong communications platform with an efficient reach across countries is vital to foster collaborations, announce local findings or events to a wider audience, and build a sense of international community. In addition to its website, PEEX offers a quarterly e-newsletter and online Blog to its community.
The PEEX e-newsletter is sent to ~650 international subscribers, the majority of whom are in Russia, China, Northern Europe and the USA. This reach is particularly important given that China and Russia have their own national social media channels, such as Weibo, WeChat and VK, which are popular alternatives to Twitter, Facebook or Instagram. Thus, having an online platform that integrates readers from Russia and China is vital for information exchange.
In addition to reaching an international readership, contributing an article to the PEEX Blog and e-newsletter supports your own dissemination efforts. Each article has its own web link, hosted on the PEEX blog, and can therefore be embedded in your research group's website, news, social media accounts, etc.
The PEEX blog and newsletter welcome articles from early career scientists. Good examples include fieldwork stories that communicate not only the research aims but also the infrastructure and instrumentation available across the PEEX domain. Writing such articles also provides training for students in popular science writing.
UArctic and FutureEarth are both partner programmes of PEEX. As the PEEX newsletter evolves, the aim is to integrate more of the overlapping relevant news and opportunities across these partners. Presently, UArctic shares the PEEX newsletter through its channels as part of the Arctic Boreal Hub thematic network.
Expanding the range of dissemination channels within the PEEX community and to the general public will allow us to discuss science in a more collaborative, open and inclusive manner.
How to cite: Buenrostro Mazon, S., Borisova, A., Altimir, N., Mahura, A., and Lappalainen, H. K.: Communication channels to build a stronger PEEX network, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-15881, https://doi.org/10.5194/egusphere-egu2020-15881, 2020.
ITS2.16/NH10.6 – Compound weather and climate events
EGU2020-271 | Displays | ITS2.16/NH10.6 | Highlight
Droughts, heatwaves, and wildfires: exploring compound and cascading events of dry hazards at the pan-European scale
Samuel Jonson Sutanto, Claudia Vitolo, Claudia Di Napoli, Mirko D'Andrea, and Henny Van Lanen
Compound and cascading natural hazards usually cause more severe impacts than any single hazard event alone. Despite the significant impacts of compound hazards, many studies have focused only on single hazards. The aim of this paper is to investigate spatio-temporal patterns of compound and cascading hazards using historical data for dry hazards, namely heatwaves, droughts, and fires, across Europe. We streamlined a simple methodology to explore the occurrence of such events on a daily basis. Soil moisture droughts were analyzed using time series of a threshold-based index obtained from the LISFLOOD hydrological model forced with observations. Heatwave and fire events were analyzed using the ERA5-based temperature and Fire Weather Index datasets. The data used in this study relate to the summer seasons from 1990 to 2018. Our results show that joint dry hazard occurrences were identified in western, central, and eastern Europe, and with a lower frequency in southern Europe and eastern Scandinavia. Drought plays a substantial role in the occurrence of compound and cascading dry hazard events, especially in southern Europe, where it drives the duration of cascading events. Moreover, drought is the most frequent hazard-precursor in cascading events, followed by compound drought-fire events. Changing the definition of a cascading dry hazard by increasing the number of allowed hazard-free days within an event from 1 to 21 (the inter-event criterion) lowers, as expected, the maximum number of cascading events from 94 to 42, and extends the maximum average duration of cascading events from 38 to 86 days. We had to use proxy observed data to determine the three selected dry hazards because long time series of reported dry hazards do not exist. A complete and specific database of reported hazards is a prerequisite for more comprehensive insight into compound and cascading dry hazards.
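The inter-event criterion can be sketched as a simple grouping of hazard days: two hazard days belong to the same cascading event if they are separated by at most `max_gap` hazard-free days. This is an illustrative reimplementation, not the authors' code:

```python
def merge_events(hazard_days, max_gap=1):
    """Group day indices on which any dry hazard is active into events.
    Consecutive hazard days separated by at most `max_gap` hazard-free
    days (the inter-event criterion) fall into the same event."""
    events = []
    for day in sorted(hazard_days):
        # gap of hazard-free days between this day and the previous
        # hazard day is (day - prev - 1); keep it within max_gap
        if events and day - events[-1][-1] <= max_gap + 1:
            events[-1].append(day)
        else:
            events.append([day])
    return events
```

Raising `max_gap` merges neighbouring events into fewer, longer ones, which is why relaxing the criterion from 1 to 21 days reduces the maximum event count while extending the average event duration.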
How to cite: Sutanto, S. J., Vitolo, C., Di Napoli, C., D'Andrea, M., and Van Lanen, H.: Droughts, heatwaves, and wildfires: exploring compound and cascading events of dry hazards at the pan-European scale, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-271, https://doi.org/10.5194/egusphere-egu2020-271, 2020.
Compound and cascading natural hazards usually cause more severe impacts than any of the single hazard events alone. Despite the significant impacts of compound hazards, many studies have only focused on single hazards. The aim of this paper is to investigate spatio-temporal patterns of compound and cascading hazards using historical data for dry hazards, namely heatwaves, droughts, and fires across Europe. We streamlined a simple methodology to explore the occurrence of such events on a daily basis. Droughts in soil moisture were analyzed using time series of a threshold-based index, obtained from the LISFLOOD hydrological model forced with observations. Heatwave and fire events were analyzed using the ERA5-based temperature and Fire Weather Index datasets. The data used in this study relates to the summer seasons from 1990 to 2018. Our results show that joint dry hazard occurrences were identified in west, central, and east Europe, and with a lower frequency in southern Europe and eastern Scandinavia. Drought plays a substantial role in the occurrence of the compound and cascading events of dry hazards, especially in southern Europe as it drives the duration of cascading events. Moreover, drought is the most frequent hazard-precursor in cascading events, followed by compound drought-fire events. Changing the definition of a cascading dry hazard by increasing the number of days without a hazard from 1 to 21 within the event (inter-event criterion), lowers as expected, the maximum number of cascading events from 94 to 42, and extends the maximum average duration of cascading events from 38 to 86 days. We had to use proxy observed data to determine the three selected dry hazards because long time series of reported dry hazards do not exist. A complete and specific database with reported hazards is a prerequisite to obtain a more comprehensive insight into compound and cascading dry hazards.
EGU2020-6439 | Displays | ITS2.16/NH10.6
Compounding Effects of Riverine and Coastal Floods and Its Implications for Coastal Urban Flood Resilience
Poulomi Ganguli and Bruno Merz
Globally, more than 600 million people reside in the low-elevation (< 10 m) coastal zone. The densely populated low-lying deltas are vulnerable to flooding primarily in two ways: (1) through extreme coastal water levels (ECWL) caused by either storm surges or heavy rain-induced river floods generated by a severe storm episode; and (2) through the co-occurrence or successive occurrence of ECWL and river floods arising from storm-producing synoptic weather conditions, leading to compound floods that cause a more severe impact than each of these extremes occurring in isolation at different times. Most earlier assessments of compound floods do not consider the delay between rainfall and streamflow events. River runoff, which also includes a subsurface groundwater recharge component, cannot be adequately described by extreme precipitation alone. While most of the literature is limited to analyzing joint dependence between variables considering only central dependence, challenges to flood hazard assessment include difficulty in delineating the severity of riverine floods, especially due to the long upper tails of the variables that influence interdependencies between the underlying drivers. Despite these uncertainties, utilizing the rich database of northwestern Europe, we assess compound flood severity and its trend by examining spatial interdependencies between the annual maximum coastal water level (as an indicator of ECWL) and the d-day lagged peak discharge within ±7 days of the occurrence of the ECWL event. Our analysis reveals a spatially coherent dependence pattern, with strong positive dependence for gauges located between 52° and 60°N and weak positive dependence for gauges above 60°N.
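The pairing step described above can be sketched in a few lines: for each annual maximum coastal water level, the peak discharge within ±7 days is extracted and the paired samples are tested for rank dependence. This is only a minimal illustration assuming plain Python lists of daily discharge; the variable names and the use of Kendall's tau as the dependence measure are our own assumptions, not necessarily the authors' exact procedure.

```python
from itertools import combinations

def lagged_peak(discharge, peak_day, window=7):
    """Peak discharge within +/- `window` days of the coastal water-level peak."""
    lo = max(0, peak_day - window)
    hi = min(len(discharge), peak_day + window + 1)
    return max(discharge[lo:hi])

def kendall_tau(x, y):
    """Kendall's rank correlation between paired annual samples (no tie handling)."""
    concordant = discordant = 0
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        s = (xi - xj) * (yi - yj)
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    pairs = len(x) * (len(x) - 1) / 2
    return (concordant - discordant) / pairs

# Toy example: one year of daily discharge, coastal peak on day 10
discharge = [50, 60, 55, 70, 120, 300, 280, 200, 150, 100,
             90, 80, 75, 70, 65, 60, 55, 50, 50, 50]
paired_q = lagged_peak(discharge, peak_day=10)  # peak over days 3..17
```

In a full analysis, one pair per gauge-year would be collected and the tau values mapped spatially.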
Based on a newly proposed index, the Compound Hazard Ratio (CHR), which compares the severity of compound floods with at-site design floods, our proof-of-principle analysis suggests that nearly half of the stream gauges show amplification of fluvial flood hazard during the catastrophic 2013/2014 winter storm Xaver, which affected most of northern Europe. Furthermore, the multi-decadal (1889–2014) temporal evolution of compound floods reveals a flood-rich period between the 1960s and 1980s, especially for mid-latitude gauges (47°–60°N), which might be closely linked to the North Atlantic Oscillation (NAO) teleconnection pattern prevailing in the region. In contrast, gauges at high latitudes (> 60°N) show a decreasing or no trend in compound floods. The approach presented here can serve as a basis for developing coastal urban flood risk management portfolios that improve resilience and reduce vulnerability in the affected areas.
How to cite: Ganguli, P. and Merz, B.: Compounding Effects of Riverine and Coastal Floods and Its Implications for Coastal Urban Flood Resilience, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6439, https://doi.org/10.5194/egusphere-egu2020-6439, 2020.
EGU2020-4455 | Displays | ITS2.16/NH10.6
The Quantitative Analysis of Synchronized River Flood on the Global Scale Considering Multiple Flood Peaks
Md Robiul Islam and Dai Yamazaki
Floods are among the most frequent and severe natural disasters worldwide. Among the different types of floods, compound floods have come to be considered an alarming threat under climate change. River flood synchronization is a compound event that occurs mainly downstream of large river confluences. When multiple rivers flood at the same time, the resultant flood magnitude and duration at their confluence rise drastically. Only a few previous studies have addressed synchronized flood risk, and only at the local scale, considering solely yearly-peak synchronization to avoid the complexity of detecting multiple peaks. However, several rivers occasionally show significant multiple peaks in a single year, and the yearly peak sometimes stays well below the flood threshold. The existing technique therefore either over- or under-estimates synchronized flood risk and is not applicable at the global scale.
Quantitative analysis of synchronized floods at the global scale considering multiple peaks remains a major challenge. Here, we have developed a new return-period-based methodology to quantify synchronous floods precisely and to identify their origin: either multiple-peak or yearly-peak synchronization. To find suitable confluence points on the global scale, we set two conditions: first, the drainage areas of the contributing rivers must be large enough for the rivers to develop distinct hydrological features; second, both rivers must contribute a significant amount of discharge to flood generation at the respective confluence point. The next-generation global river routing model CaMa-Flood was employed to compute discharge for the return-period-based analysis of the selected rivers and confluence points. Finally, when a confluence point is in flood, we check the return-period discharge of the contributing rivers; if both exceed their 2-year return-period discharge, the event is considered a synchronized flood.
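The 2-year return-period criterion can be illustrated with a minimal empirical sketch, assuming annual maximum discharge series are available for each contributing river. The Weibull plotting position used below is a common convention, not necessarily the exact estimator used in the study.

```python
def return_period_threshold(annual_maxima, T=2.0):
    """Empirical discharge with return period T, using the Weibull
    plotting position: rank m (1 = largest) has return period (n + 1) / m."""
    ranked = sorted(annual_maxima, reverse=True)
    n = len(ranked)
    for m, q in enumerate(ranked, start=1):
        if (n + 1) / m <= T:
            return q
    return ranked[-1]

def synchronized_flood(q_a, q_b, thr_a, thr_b):
    """Flag a day as a synchronized flood when both contributing rivers
    exceed their respective 2-year return-period discharge."""
    return q_a >= thr_a and q_b >= thr_b

# Toy example: seven years of annual maximum discharge for one tributary
thr = return_period_threshold([310., 450., 280., 520., 390., 610., 340.])
```

Applying the check on every day when the confluence point itself is in flood separates multiple-peak from yearly-peak synchronization.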
We found 53 confluence points on the global scale where catastrophic flood hazards may occur due to flood synchronization, and historical floods at 49 of them show different degrees of synchronization. Confluence-zone floods at high latitudes are mostly affected by yearly-peak synchronization, possibly due to snowmelt dominance. In contrast, historical synchronized floods in tropical and sub-tropical regions are affected by different levels of multiple-peak synchronization, where rainfall timing might play a significant role. This method reveals the physical mechanism of historical catastrophic fluvial floods at the confluences of large rivers.
How to cite: Islam, M. R. and Yamazaki, D.: The Quantitative Analysis of Synchronized River Flood on the Global Scale Considering Multiple Flood Peaks, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4455, https://doi.org/10.5194/egusphere-egu2020-4455, 2020.
EGU2020-1841 | Displays | ITS2.16/NH10.6
Measuring temporal and spatial scales of compound events in the United Kingdom
Aloïs Tilloy, Bruce Malamud, Hugo Winter, and Amelie Joly-Laugel
Multi-hazard events have the potential to cause damage to infrastructure and people that may differ greatly from the risks posed by singular hazards. Interrelations between natural hazards also operate on different spatial and temporal scales than single natural hazards, and measuring the spatial and temporal scales of these interrelations remains challenging. The objective of this study is to refine and measure the temporal and spatial scales of natural hazards and their interrelations using a spatiotemporal clustering technique. To do so, spatiotemporal information about natural hazards is extracted from the ERA5 climate reanalysis. We focus on the interrelation between two natural hazards (extreme precipitation and extreme wind gusts) during the period 1969-2019 within a region including Great Britain and north-west France. The characteristics of our input data (large volume, high noise level) and the absence of assumptions about the shape of our hazard clusters guided our choice toward the DBSCAN clustering algorithm. To create hazard clusters, we retain only extreme values (above the 99th percentile) of precipitation and wind gust. We analyse the characteristics (e.g., size, duration, season, intensity) of single and compound events of rain and wind impacting our study area. We then measure the impact of the spatial and temporal scales defined in this study on the nature of the interrelation between extreme rainfall and extreme wind in the UK, and demonstrate how this methodology can be applied to a different set of natural hazards.
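As a hedged illustration of the clustering step, the sketch below implements a minimal DBSCAN over (longitude, latitude, day) points retained above the extreme threshold. In practice one would use an optimized library implementation (e.g. scikit-learn) and a metric that weights space and time appropriately; both the brute-force neighbour search and the plain Euclidean metric here are simplifying assumptions.

```python
import math

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: returns one cluster label per point (-1 = noise).
    A point is a core point if it has >= min_pts neighbours within eps
    (counting itself); clusters grow by expanding from core points."""
    def neighbours(i):
        return [j for j, q in enumerate(points)
                if math.dist(points[i], q) <= eps]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbours(i)
        if len(nbrs) < min_pts:
            labels[i] = -1          # provisionally noise
            continue
        cluster += 1
        labels[i] = cluster
        seeds = list(nbrs)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster  # noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_nbrs = neighbours(j)
            if len(j_nbrs) >= min_pts:
                seeds.extend(j_nbrs)  # expand only from core points
    return labels
```

Each resulting cluster is one spatiotemporal hazard event, whose extent and duration can then be measured directly from its member points.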
How to cite: Tilloy, A., Malamud, B., Winter, H., and Joly-Laugel, A.: Measuring temporal and spatial scales of compound events in the United Kingdom, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1841, https://doi.org/10.5194/egusphere-egu2020-1841, 2020.
EGU2020-13408 | Displays | ITS2.16/NH10.6
Multiple hazards under future UK Climate Projections
Freya Garry and Dan Bernie
When two or more extreme weather events occur either simultaneously or in close succession, the societal and economic impacts may be more severe than when extreme hazards occur alone. Impacts may also cascade across different sectors of society or amplify impacts in another sector. Perturbed parameter ensemble simulations of projections to 2080 have been generated at the UK Met Office to cover the UK at high spatial (12 km or 2.2 km) and temporal (daily or sub-daily) resolution as part of the “UK Climate Projections”. We use the regional 12 km model simulations at daily resolution to consider how the frequency, duration and spatial extent of multiple extreme hazard events in the UK change over the 21st century. We will show case studies of multiple extreme hazard pairings that pose a risk to UK sectors, for example, the risk of hot and dry weather to agricultural harvests. By working with stakeholders that have a good understanding of their vulnerabilities and exposure, we consider multiple extreme events in a risk projection framework. This work is funded under the Strategic Priority Fund for UK Climate Resilience.
How to cite: Garry, F. and Bernie, D.: Multiple hazards under future UK Climate Projections, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13408, https://doi.org/10.5194/egusphere-egu2020-13408, 2020.
EGU2020-18987 | Displays | ITS2.16/NH10.6
Nonstationary Compound Weather Extremes in Canada based on Large Ensemble Climate Simulations
Mohammad Reza Najafi, Harsimrenjit Singh, and Alex Cannon
Compound weather extremes, including warm-wet and warm-dry events, can lead to catastrophes such as wildfires, droughts and flooding. We use three large ensembles (3 × 50 members) of climate simulations to study the non-stationarity of compound events based on an ensemble pooling approach: the Canadian Regional Climate Model Large Ensemble (CanRCM4-LE) and the Canadian Large Ensembles Adjusted Datasets (CanLEAD1&2). The CanLEAD products include daily precipitation and maximum and minimum temperature from CanRCM4-LE that are bias-corrected using a novel statistical approach, which preserves the multivariate structure of the climate variables and corrects for univariate biases. Each ensemble member is validated against the NRCANmet observed data over Canada for 1951-2000 using a hierarchical Bayesian framework. Additionally, the ability of the models to mimic the dependence structure of the observations is tested using copulas. Extreme climate indices are estimated for a baseline period and changes in extremes are explored across four future warming scenarios corresponding to +1.5°C, +2.0°C, +3.0°C and +4.0°C warming above the pre-industrial period of 1850-1900. The ensemble pooling approach allows for the quantification of changes in the dependence structure and its subsequent effects on compound extremes in the future.
Results show that the CanLEAD products can reduce warm and wet biases in CanRCM4-LE over the majority of Canadian regions in all seasons except for winter. The ensembles unanimously project significant warming and wetting trends over most of southern Canada excluding the Canadian Prairies in summer, which show a drying trend towards the end of the 21st century. The overall trend shows an increase in hot extremes in central and southeastern Canada and a significant increase in wet extremes in western coastal regions. Results from compound extreme analysis show that there is significant under-estimation of extremes when the dependence between temperature and precipitation is ignored. For example, a 100-year hot and dry event under the assumption of independence becomes a ~60-year event when the dependence is characterized using copulas.
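The effect quoted above (a joint hot-and-dry event being much more frequent than the independence assumption suggests) can be sketched with a copula. The Gumbel copula family and the parameter value below are illustrative assumptions only, not the study's fitted model.

```python
import math

def gumbel_copula(u, v, theta):
    """Gumbel copula C(u, v); theta >= 1 controls upper-tail dependence
    (theta = 1 is exact independence)."""
    return math.exp(-(((-math.log(u)) ** theta
                       + (-math.log(v)) ** theta) ** (1.0 / theta)))

def and_return_period(p_x, p_y, theta=1.0):
    """Return period of 'both margins exceeded':
    P(X > x, Y > y) = 1 - u - v + C(u, v), with u, v the marginal
    non-exceedance probabilities."""
    u, v = 1.0 - p_x, 1.0 - p_y
    joint_exceedance = 1.0 - u - v + gumbel_copula(u, v, theta)
    return 1.0 / joint_exceedance

# Two 10-year marginal extremes: 100-year joint event if independent,
# markedly more frequent once upper-tail dependence is introduced.
t_indep = and_return_period(0.1, 0.1, theta=1.0)
t_dep = and_return_period(0.1, 0.1, theta=2.0)
```

Fitting theta to pooled ensemble data, rather than fixing it, is what allows changes in the dependence structure itself to be quantified.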
How to cite: Najafi, M. R., Singh, H., and Cannon, A.: Nonstationary Compound Weather Extremes in Canada based on Large Ensemble Climate Simulations, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18987, https://doi.org/10.5194/egusphere-egu2020-18987, 2020.
EGU2020-20116 | Displays | ITS2.16/NH10.6 | Highlight
Causes and implications of the unforeseen 2016 extreme yield loss in France’s breadbasket
Tamara Ben Ari
The 2016 wheat harvest in France suffered an unforeseen and unprecedented production loss. At 5.4 tonnes ha-1, wheat yield was the lowest recorded since 1986 and 30% below the five-year average. Crop yield forecasting can be considered near-real-time impact modelling, but unfortunately none of the forecasting systems in place anticipated the extent of the impact. The 2015/2016 growing season was characterized by compounding warm autumn temperatures and abnormally wet conditions in the following spring. High rainfall and high temperatures leading to fungal diseases, soil waterlogging and anoxia, low radiation affecting grain filling, and leaching of nitrogen from the root zone have all been suggested as important factors ultimately leading to the yield loss. Binomial logistic regressions accounting for autumn and spring temperatures and precipitation suggest that the odds of an extreme yield loss in 2016 were 35 times higher than expected. The challenge now is to further identify the variety of biotic and abiotic processes interacting at different timescales. Collecting relevant insights in the field or from trial experiments, and confronting these with statistical and biophysical crop modelling, will be key to achieving this. Improved impact-relevant indicators will need to be integrated into operational crop yield forecasting systems in preparation for future compound events.
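The odds computation from a binomial logistic regression can be sketched as follows; the intercept and coefficients below are purely hypothetical placeholders, since the fitted values are not given in the abstract.

```python
import math

def logistic_prob(intercept, coefs, predictors):
    """P(extreme yield loss) from a fitted binomial logit model:
    p = 1 / (1 + exp(-(b0 + sum(b_i * x_i)))."""
    z = intercept + sum(b * x for b, x in zip(coefs, predictors))
    return 1.0 / (1.0 + math.exp(-z))

def odds_ratio(p_event, p_reference):
    """How many times higher the odds of loss are in the event year
    than in a reference year."""
    return (p_event / (1.0 - p_event)) / (p_reference / (1.0 - p_reference))

# Hypothetical predictors: autumn temperature anomaly, spring
# precipitation anomaly (standardised), with made-up coefficients.
p_2016 = logistic_prob(-3.0, [0.8, 1.2], [2.1, 2.5])
p_normal = logistic_prob(-3.0, [0.8, 1.2], [0.0, 0.0])
```

Comparing `odds_ratio(p_2016, p_normal)` against the expected climatological odds is the kind of calculation behind the "35 times higher" statement.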
How to cite: Ben Ari, T.: Causes and implications of the unforeseen 2016 extreme yield loss in France’s breadbasket, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20116, https://doi.org/10.5194/egusphere-egu2020-20116, 2020.
EGU2020-11527 | Displays | ITS2.16/NH10.6
Increasing risks of apple tree frost damage under climate change
Inga Menke, Peter Pfleiderer, and Carl-Friedrich Schleussner
The impacts of global warming on agriculture and crop production are already visible today and are projected to intensify in the future. As horticultural and agricultural systems are complex biological systems, their responses to a changing climate can be non-linear and at times counter-intuitive. These systems undergo yearly cycles of growth with different plant characteristics in each of their phenological phases. They are thus especially sensitive to changes in seasonality, in addition to changes in the annual mean and single extreme events.
Here we show that, as a result of warmer winters, the risk of frost damage to apple trees in Germany is projected to be about 10% higher in a 2°C world compared to today. Warmer winters lead to fewer frost days but also to earlier apple blossom. This can result in an overall increase in the number of years in which frost days occur after blossom.
Using large ensemble climate simulations, we analyze this compound event: frost days after blossom, following warm winters. Although the projected shift in blossom day and the decrease in frost days are relatively homogeneous over Germany, the change in frost risk varies considerably between regions. Our results highlight the importance of treating frost risk as a compound event of frost days after warm winters, instead of comparing the average shift in blossom days with the decrease in frost days.
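A minimal sketch of this compound-event logic, assuming a growing-degree-day proxy for the blossom date; the GDD threshold and base temperature below are illustrative assumptions, not the phenological model used in the study.

```python
def blossom_day(tmean, gdd_threshold=150.0, base=4.0):
    """Day of year on which accumulated growing degree days (GDD)
    first reach a threshold -- a simple phenological blossom proxy."""
    gdd = 0.0
    for day, t in enumerate(tmean, start=1):
        gdd += max(0.0, t - base)
        if gdd >= gdd_threshold:
            return day
    return None

def frost_days_after_blossom(tmin, tmean, **kwargs):
    """Count frost days (tmin < 0 degC) on or after the blossom day.
    A warm winter advances the blossom day, exposing more of the
    remaining frost days -- the compound event of interest."""
    day = blossom_day(tmean, **kwargs)
    if day is None:
        return 0
    return sum(t < 0.0 for t in tmin[day - 1:])
```

Counting the years in which this value is positive, separately for warm and normal winters, reproduces the compound framing of the abstract.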
Reference: Pfleiderer, P., Menke, I. & Schleussner, C.-F. Increasing risks of apple tree frost damage under climate change. Clim. Change (2019). doi:10.1007/s10584-019-02570-y
How to cite: Menke, I., Pfleiderer, P., and Schleussner, C.-F.: Increasing risks of apple tree frost damage under climate change, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11527, https://doi.org/10.5194/egusphere-egu2020-11527, 2020.
EGU2020-674 | Displays | ITS2.16/NH10.6
Risk of crop-failure due to compound hot and dry extremes in the Iberian Peninsula
Andreia Ribeiro, Ana Russo, Célia Gouveia, Patrícia Páscoa, and Jakob Zscheischler
Crop health and favourable yields depend strongly on precipitation and temperature patterns during the crop’s growing season. Compound events, such as co-occurring drought and heat, can lead to extreme crop failure and cause larger damages than drought or heat alone.
Here we assess the relative role of hot and dry conditions (HDC) in crop yields and evaluate how compound HDC enhance the probability of failure in rainfed cropping systems in the Iberian Peninsula. We use annual wheat yield data at the province level and cluster provinces with similar sensitivities of yields to climate conditions. Copula theory was applied to model the trivariate dependence between 3-monthly means of maximum temperature, 3-monthly means of precipitation and wheat yields. The climate variables and averaging periods were chosen to maximize the dependence between the driving climate conditions during the growing season and the annual yields. Copulas enable the estimation of conditional probabilities of crop loss under different hot and dry severity levels based on the trivariate joint distribution.
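The conditional-probability machinery can be illustrated with a minimal Monte Carlo sketch. The copula family (Gaussian) and all correlation values below are assumptions for illustration only; the abstract does not specify which copula was fitted or its parameters.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Invented correlations: hot-dry dependence, yield falling with heat
# and rising with rain (illustrative, not the study's fitted values)
corr = np.array([[ 1.0, -0.6, -0.5],   # Tmax
                 [-0.6,  1.0,  0.5],   # precipitation
                 [-0.5,  0.5,  1.0]])  # yield

# Sample correlated normals and map them to uniform margins
z = rng.multivariate_normal(np.zeros(3), corr, size=200_000)
u = norm.cdf(z)  # columns: U_Tmax, U_precip, U_yield

# Conditional probability of crop loss (yield below its 20th percentile)
# given compound hot-dry conditions (Tmax above its 80th percentile and
# precipitation below its 20th)
hot_dry = (u[:, 0] > 0.8) & (u[:, 1] < 0.2)
p_loss_given_hot_dry = np.mean(u[hot_dry, 2] < 0.2)
print(p_loss_given_hot_dry)  # well above the unconditional 0.2
```

With these (invented) dependencies, the conditional loss probability is roughly double the unconditional one, which is the kind of compound amplification the study quantifies.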
Our results demonstrate that the probability of wheat loss increases with the severity of the compound HDC and that losses are significantly larger during co-occurring drought and heat than under individual water or heat stress. Moreover, the difference between heat impacts and compound heat-and-drought impacts is larger than the difference between drought impacts and compound heat-and-drought impacts, suggesting that water stress is the major driver of wheat losses. These findings can contribute to the design of management options to mitigate climate-related crop impacts and guide decision-making in agricultural practices.
Acknowledgements: A.F.S.Ribeiro would like to acknowledge the financial support through FCT (Fundação para a Ciência e a Tecnologia, Portugal) under the projects UIDB/50019/2020 – IDL and PTDC/CTA-CLI/28902/201 (IMPECAF). A.F.S.Ribeiro is also thankful to FCT for the grant PD/BD/114481/2016 and to the COST Action CA17109 for a Short Term Scientific Mission (STSM) grant to develop the present work.
How to cite: Ribeiro, A., Russo, A., Gouveia, C., Páscoa, P., and Zscheischler, J.: Risk of crop-failure due to compound hot and dry extremes in the Iberian Peninsula, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-674, https://doi.org/10.5194/egusphere-egu2020-674, 2020.
EGU2020-18514 | Displays | ITS2.16/NH10.6
Identifying compound meteorological drivers of extreme wheat yield loss using Lasso regression
Christoph Sauter, Cristina Deidda, Leila Rahimi, Pauline Rivoire, Elisabeth Tschumi, Johannes Vogel, Karin van der Wiel, and Jakob Zscheischler
Compound weather events may lead to extreme impacts that can affect many aspects of society including agriculture. The identification of the underlying mechanisms that cause extreme impacts, such as crop failure, is of crucial importance to improve their understanding and forecasting. Here we investigate whether key meteorological drivers of extreme yield loss can be identified using Least Absolute Shrinkage and Selection Operator (Lasso) in a model environment.
We use yearly wheat yields as simulated by the APSIM crop model driven by 1600 years of daily weather data from a global climate model (EC-Earth v2.3) under present-day conditions for the Northern Hemisphere. We define extreme yield loss as years with yield below the 5th percentile. We apply logistic Lasso regression to predict whether weather conditions during the growing season lead to crop failure. Lasso selects the most relevant variables from a large set of predictors that best explain the target variable via regularization. Our input variables include monthly averaged values of maximum temperature, vapour pressure deficit and precipitation as well as established extreme event indicators such as maximum and minimum temperature during the growing season, diurnal temperature range, total number of frost days, and maximum five-day precipitation sum.
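A minimal sketch of the selection step, using invented synthetic data in place of the APSIM/EC-Earth output: only two of twenty standardised predictors truly drive failure, and the L1 penalty should recover them while zeroing out most of the rest. All names, coefficients and the regularization strength are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# 2000 toy "years" of 20 monthly weather predictors; only predictors
# 0 and 1 (say, a hot month and a dry month) drive crop failure
n_years, n_pred = 2000, 20
X = rng.standard_normal((n_years, n_pred))
logit = 1.5 * X[:, 0] - 1.5 * X[:, 1] - 2.0
y = (rng.random(n_years) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# The L1 (Lasso) penalty shrinks irrelevant coefficients to exactly
# zero, leaving a sparse set of selected drivers
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
clf.fit(X, y)
selected = np.flatnonzero(np.abs(clf.coef_[0]) > 1e-8)
print("selected predictor indices:", selected)
```

The sparsity pattern of `clf.coef_` is what distinguishes regions driven by a single variable from regions driven by an interplay of several, as described above.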
We obtain good model performance in Central Europe and the American Corn Belt, while yield losses in Asian and African regions are less accurately predicted. Model performance and mean wheat yield strongly correlate, i.e. model performance is highest in regions with relatively large mean yield. Based on the selected predictors, we identify regions where crop loss is predominantly influenced by a single variable and regions where it is driven by the interplay of several variables, i.e. compound events. Especially in the Midwest and Eastern regions of the USA, several variables are required to correctly predict yield losses. This illustrates the importance of accounting for the interplay of various weather conditions over the course of the growing season to be able to determine crop yield losses more precisely.
We conclude that Lasso regression is a useful tool for detecting the compound drivers of extreme impacts, and it can be applied to other impact variables such as fires or floods. As the detected relationships are purely correlative, more detailed analyses are required to establish the causal structure between drivers and impacts. Furthermore, using the same model environment, the robustness of the identified relationships will be tested in a climate change context.
How to cite: Sauter, C., Deidda, C., Rahimi, L., Rivoire, P., Tschumi, E., Vogel, J., van der Wiel, K., and Zscheischler, J.: Identifying compound meteorological drivers of extreme wheat yield loss using Lasso regression, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18514, https://doi.org/10.5194/egusphere-egu2020-18514, 2020.
EGU2020-9004 | Displays | ITS2.16/NH10.6
Investigating the impact of different drought-heat signatures on carbon dynamics using a dynamic global vegetation model
Elisabeth Tschumi, Sebastian Lienert, Karin van der Wiel, Fortunat Joos, and Jakob Zscheischler
Droughts and heat waves have large impacts on the terrestrial carbon cycle. They lead to reductions in gross and net carbon uptake or anomalous increases in carbon emissions to the atmosphere because of responses such as stomatal closure, hydraulic failure and vegetation mortality. The impacts are particularly strong when drought and heat occur at the same time. Climate model simulations diverge in their occurrence frequency of compound hot and dry events, and it is unclear how these differences affect carbon dynamics. Furthermore, it is unknown whether an increase in frequency of droughts and heat waves leads to long-term changes in carbon dynamics, and how such an increase might affect vegetation composition.
To study the immediate and long-term effects of varying signatures of droughts and heat waves on carbon dynamics such as inter-annual variability of carbon fluxes and cumulative carbon uptake, we employ the state-of-the-art dynamic global vegetation model LPX-Bern (v1.4) under different drought-heat scenarios.
We constructed five 100-year scenarios with different drought-heat signatures, representing “control”, “close to mean seasonal cycle”, “drought only”, “heat only”, and “compound drought and heat” climate forcings for LPX-Bern. This is done by sampling daily climate variables from a 2000-year stationary simulation of a general circulation model (EC-Earth) under present-day climate conditions. Such sampling ensures physically consistent co-variability between the climate variables in the forcing.
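The year-sampling idea can be sketched as follows. All numbers, the JJA window and the tercile thresholds are illustrative assumptions, not the study's actual scenario criteria; the point is that whole years are resampled, so the co-variability between variables stays physically consistent.

```python
import numpy as np

rng = np.random.default_rng(5)
n_years, n_days = 2000, 365

# Toy stand-in for a 2000-year stationary run: daily temperature and
# precipitation (all values invented)
temp = rng.normal(15.0, 8.0, (n_years, n_days))
prec = rng.gamma(0.8, 3.0, (n_years, n_days))

# Summer (JJA, roughly days 152-243) means per simulated year
t_jja = temp[:, 152:244].mean(axis=1)
p_jja = prec[:, 152:244].mean(axis=1)

# "Compound drought and heat" forcing: resample 100 whole years from
# the pool of years that are both hot (top tercile) and dry (bottom
# tercile); selecting entire years keeps all variables consistent
pool = np.flatnonzero((t_jja > np.quantile(t_jja, 2 / 3))
                      & (p_jja < np.quantile(p_jja, 1 / 3)))
scenario_years = rng.choice(pool, size=100, replace=True)
scenario_temp = temp[scenario_years]  # shape (100, 365), ready as forcing
```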
We investigate the carbon-cycle response to different drought-heat signatures, and the associated changes in vegetation structure, on a global grid covering different land cover types and climate zones. Our results provide a better understanding of the links between hot and dry conditions and carbon dynamics. This may help to reduce uncertainties in carbon cycle projections, which is important for constraining carbon cycle-climate feedbacks.
How to cite: Tschumi, E., Lienert, S., van der Wiel, K., Joos, F., and Zscheischler, J.: Investigating the impact of different drought-heat signatures on carbon dynamics using a dynamic global vegetation model, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9004, https://doi.org/10.5194/egusphere-egu2020-9004, 2020.
EGU2020-3707 | Displays | ITS2.16/NH10.6
Increasing compound warm spells and droughts during the growing season in the Mediterranean Basin
Johannes Vogel and Eva Paton
The Mediterranean Basin is known as a hot spot of climate change and is therefore especially prone to increasing frequencies of warm spells and droughts. Investigating these events in isolation neglects their interactions, which illustrates the need to account for such compound events in a holistic manner. We analysed in which months the frequency of compound warm spells and droughts increased most over the 40-year period 1979–2018. Warm spells and droughts were detected using daily maximum air temperature, precipitation and potential evaporation data from ERA5. Two drought indices were calculated: the Standardised Precipitation Index (SPI) and the Standardised Precipitation-Evapotranspiration Index (SPEI).
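For reference, the SPI maps accumulated precipitation onto a standard normal via a fitted distribution (commonly a gamma); the SPEI is constructed analogously from precipitation minus potential evapotranspiration. A minimal sketch, assuming a gamma fit and a single calibration period, with invented precipitation values:

```python
import numpy as np
from scipy import stats

def spi(precip_totals):
    """Standardised Precipitation Index: fit a gamma distribution to
    accumulated precipitation and map its quantiles onto a standard
    normal. A minimal sketch (single calibration period, no handling
    of zero-precipitation months), not the full operational algorithm."""
    a, loc, scale = stats.gamma.fit(precip_totals, floc=0)
    u = stats.gamma.cdf(precip_totals, a, loc=loc, scale=scale)
    return stats.norm.ppf(u)

rng = np.random.default_rng(1)
monthly_precip = rng.gamma(2.0, 30.0, size=480)  # 40 toy years
z = spi(monthly_precip)
# z <= -1 marks moderate-or-worse drought months in the SPI convention
print(round(float(z.mean()), 2), round(float((z <= -1.0).mean()), 2))
```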
Our results show that the number of compound events increases substantially across almost the entire Mediterranean, indicating that novel climatic conditions are emerging. The increases in compound events are predominantly driven by the rising number of warm spells, whereas SPI droughts remain almost constant. However, rising temperatures lead to higher evapotranspiration, which alters the water balance in the Mediterranean. The SPEI therefore shows significant increases, in contrast to the SPI, indicating that even though the amount of precipitation does not decrease, the Mediterranean Basin is likely facing drier conditions due to increasing evapotranspiration. The largest changes in the number of compound warm spells and droughts occur between late winter and early summer. This finding is particularly relevant for Mediterranean ecosystems because this period encompasses the main growing season, during which ecosystem productivity and carbon sequestration might therefore be reduced.
How to cite: Vogel, J. and Paton, E.: Increasing compound warm spells and droughts during the growing season in the Mediterranean Basin, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3707, https://doi.org/10.5194/egusphere-egu2020-3707, 2020.
EGU2020-4590 | Displays | ITS2.16/NH10.6
Climate change effects on hydrometeorological compound events over southern Norway
Benjamin Poschlod, Jakob Zscheischler, Jana Sillmann, Raul R. Wood, and Ralf Ludwig
Compound events are characterized as a combination of multiple drivers and/or hazards that contributes to societal, economic or environmental risk. In southern Norway, hydrometeorological compound events can trigger severe floods, for instance through the joint occurrence of rainfall and snowmelt in south-eastern Norway in 1995 and 2013.
Because of this high impact, the investigation of compound events is important, but it is hampered by limiting factors: their multivariate character and correspondingly very rare occurrence require a large database for statistically robust analysis, whereas the available meteorological observations are too scarce in space and time.
In this study, we present a quantile-based framework to define and examine compound events within a single-model initial-condition large ensemble (SMILE). To overcome the limitation of data scarcity, we use 50 high-resolution climate simulations from the SMILE CRCM5-LE to investigate two hydrometeorological compound event types in southern Norway:
(1) Heavy rainfall on saturated soil during the summer months (June, July, August, September),
(2) Concurrent heavy rainfall and snowmelt (also often referred to as rain-on-snow).
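A quantile-based compound definition of this kind can be sketched in a few lines. The drivers, their dependence and the 95th-percentile threshold below are invented for illustration; the point is that dependence between drivers inflates the joint exceedance frequency well beyond the independent expectation.

```python
import numpy as np

rng = np.random.default_rng(7)
n_days = 122 * 30  # 30 toy summers (JJAS)

# Invented correlated drivers standing in for rainfall and soil moisture
rain = rng.gamma(2.0, 5.0, n_days)
soil = 0.5 * rain + rng.normal(0.0, 5.0, n_days)

# Quantile-based compound definition: a day counts when BOTH drivers
# exceed their own 95th percentile
q_rain = np.quantile(rain, 0.95)
q_soil = np.quantile(soil, 0.95)
compound = (rain > q_rain) & (soil > q_soil)

p_joint = compound.mean()
p_if_independent = 0.05 * 0.05  # expected frequency for independent drivers
print(p_joint, p_if_independent)
```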
Furthermore, the use of climate model data enables us to quantify the impact of climate change on the frequency and spatial distribution of both types of compound events. To this end, we compare current climate conditions (1980-2009) with future conditions (2070-2099) under the high-emission scenario RCP 8.5. We find that the frequency of heavy rainfall on saturated soil increases by 38% on average by 2070-2099. In contrast, the occurrence probability of rain-on-snow is projected to decrease by 48% over the whole study area, largely driven by decreases in snowfall. The spatial patterns of both events are found to shift. Additionally, we assess the range of natural variability of the drivers and of the compound event probability across the 50 members of the CRCM5-LE. The univariate spread of the meteorological drivers is found to be relatively small, whereas the occurrence probability of both compound events shows high inter-member variability. Hence, we conclude that the frequency of the joint occurrence of the contributing drivers is highly variable, which is why a SMILE is needed to assess this probability.
Our current work shows the limitations of regional climate models, stressing the need for even higher-resolution setups to resolve the complex topography of Norway. However, it also highlights the benefits of SMILE simulations for the analysis of compound events.
How to cite: Poschlod, B., Zscheischler, J., Sillmann, J., Wood, R. R., and Ludwig, R.: Climate change effects on hydrometeorological compound events over southern Norway, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4590, https://doi.org/10.5194/egusphere-egu2020-4590, 2020.
EGU2020-6597 | Displays | ITS2.16/NH10.6
Drought monitoring over Semi-Humid rain-fed winter wheat region in northwest China using remote sensing data from 1981 to 2010
Ni Guo, Wei Wang, and Lijuan Wang
Drought is a widespread climate phenomenon throughout the world, as well as one of the natural disasters that most seriously affect agriculture. Losses caused by drought in China amount to about 15 percent of all losses caused by natural disasters every year. Effective real-time drought monitoring and improved early-warning capacity are therefore of great significance for drought mitigation. Satellite remote sensing of drought has developed rapidly and has been one of the most widely used methods worldwide since the 1980s. Studies have shown that remote sensing drought indices, especially vegetation drought indices (VIs), are the most suitable for semi-arid and semi-humid climate regions. We chose the semi-arid Longdong rain-fed agriculture area in the northwest of Gansu Province as the study area, one of the regions in China where drought occurs most frequently. To estimate drought characteristics from 1981 to 2010, we used monthly NDVI data, the VCI and AVI indices derived from NDVI, the Comprehensive meteorological drought Index (CI), and soil moisture observations at 20 cm depth. Results show that:
- The frequency and severity of drought in the Longdong region followed a low-high-low trend from 1981 to 2010: the 1980s showed the lowest values, the 1990s the highest, and the 2000s a declining trend.
- AVI and VCI were consistent with CI and soil moisture in monitoring drought, but with higher volatility and a one-month lag.
- A Winter Wheat Drought Index (WWDI) was proposed based on analyses of inter-annual NDVI data during the winter wheat growth period; it represents the drought degree over the whole growth period well and thus provides an efficient index for winter wheat disaster assessment.
- The winter wheat drought degree in the study region from 1981 to 2010 was obtained from the WWDI. The driest years identified by the WWDI were 1995, 2000, 1992, 1996 and 1997, in close agreement with the actual disaster situations.
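The VCI referred to above is conventionally computed by scaling the current NDVI within its multi-year minimum-maximum range for the same period; the example values below are invented:

```python
def vci(ndvi, ndvi_min, ndvi_max):
    """Vegetation Condition Index: where the current NDVI sits within
    its multi-year min-max range for the same period
    (0 = worst on record, 100 = best on record)."""
    return 100.0 * (ndvi - ndvi_min) / (ndvi_max - ndvi_min)

# Invented example: a July NDVI of 0.35 against a historical range of
# 0.20-0.60 for the same pixel
print(vci(0.35, 0.20, 0.60))  # ~37.5, below-average vegetation condition
```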
How to cite: Guo, N., Wang, W., and Wang, L.: Drought monitoring over Semi-Humid rain-fed winter wheat region in northwest China using remote sensing data from 1981 to 2010, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6597, https://doi.org/10.5194/egusphere-egu2020-6597, 2020.
EGU2020-17387 | Displays | ITS2.16/NH10.6
Comparison of univariate and multivariate bias-adjusting methods for hydrological impact assessment under climate change conditions
Jorn Van de Velde, Bernard De Baets, Matthias Demuzere, and Niko Verhoest
Climate change is one of the largest challenges currently faced by society, with an impact on many systems, such as hydrology. To assess this impact locally, Regional Climate Model (RCM) data are often used as input for hydrological rainfall-runoff models. However, RCMs are still biased in comparison with observations. Many methods have been developed to adjust these biases, but only in the last few years have methods become available that also adjust biases in the inter-variable correlation. This is especially important for hydrological impact assessment, as hydrological models often need multiple locally correct input variables. In contrast to univariate bias-adjusting methods, the multivariate methods have not yet been thoroughly compared. In this study, two univariate and three multivariate bias-adjusting methods are compared with respect to their performance under climate change conditions. To do this, the methods are calibrated in the late 20th century (1970-1989) and validated in the early 21st century (1998-2017), in which the effect of climate change is already visible. The adjusted variables are precipitation, evaporation and temperature, of which the resulting evaporation and precipitation are used as input for a rainfall-runoff model, allowing validation of the methods on discharge. The methods are also evaluated using indices based on the calibrated variables, the temporal structure, and the multivariate correlation. For precipitation, all methods decrease the bias in a comparable manner. However, for many other indices the results differ considerably between the bias-adjusting methods. The multivariate methods often perform worse than the univariate methods, a result that is especially pronounced for temperature and evaporation.
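As a point of reference for the univariate baseline, empirical quantile mapping (a common univariate approach, not necessarily one of the specific methods compared here) can be sketched as follows. It also shows why multivariate methods matter: each variable is adjusted in isolation, so inter-variable correlations are left uncorrected. All data below are invented.

```python
import numpy as np

def quantile_map(model_hist, obs, model_target):
    """Empirical quantile mapping: replace each model value by the
    observed value at the same quantile of the historical model
    distribution. A sketch only; one variable at a time."""
    ranks = np.searchsorted(np.sort(model_hist), model_target) / len(model_hist)
    return np.quantile(obs, np.clip(ranks, 0.0, 1.0))

rng = np.random.default_rng(3)
obs = rng.gamma(2.0, 4.0, 5000)                     # "observations"
model_hist = rng.gamma(2.0, 4.0, 5000) * 1.4 + 2.0  # wet-biased model
adjusted = quantile_map(model_hist, obs, model_hist)
# Adjusted historical run should recover the observed distribution
print(model_hist.mean().round(1), obs.mean().round(1), adjusted.mean().round(1))
```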
How to cite: Van de Velde, J., De Baets, B., Demuzere, M., and Verhoest, N.: Comparison of univariate and multivariate bias-adjusting methods for hydrological impact assessment under climate change conditions, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17387, https://doi.org/10.5194/egusphere-egu2020-17387, 2020.
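The univariate baseline the abstract compares against can be illustrated with empirical quantile mapping, the most common univariate bias-adjusting method. The sketch below is not from the authors; the data and the gamma-distributed "precipitation" are entirely hypothetical, and only the general technique is shown.

```python
import numpy as np

def quantile_map(model_cal, obs_cal, model_fut):
    """Empirical quantile mapping: map each model value to the observed
    value at the same quantile of the calibration-period distributions."""
    # empirical CDF of the calibration-period model series, evaluated
    # at the (future) model values to be adjusted
    q = np.searchsorted(np.sort(model_cal), model_fut) / len(model_cal)
    q = np.clip(q, 0.0, 1.0)
    # read off the corresponding quantile of the observations
    return np.quantile(obs_cal, q)

rng = np.random.default_rng(42)
obs_cal = rng.gamma(2.0, 3.0, size=7300)            # "observed" daily precipitation
model_cal = obs_cal * 1.3 + 1.0                     # model with a systematic bias
model_fut = rng.gamma(2.0, 3.0, size=7300) * 1.3 + 1.0  # same bias, later period

adjusted = quantile_map(model_cal, obs_cal, model_fut)
print(model_fut.mean() - obs_cal.mean())   # raw bias (clearly non-zero)
print(adjusted.mean() - obs_cal.mean())    # strongly reduced after mapping
```

A univariate method like this adjusts each variable separately and therefore leaves the inter-variable correlation untouched, which is exactly the gap the multivariate methods in the study are designed to fill.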
EGU2020-2327 | Displays | ITS2.16/NH10.6
Building Industry Resilience to Compound EventsHugo Winter, Alois Tilloy, Alistair Hendry, and Amelie joly-Laugel
Key Words: Compound events; Multi-hazards; Industry application; Multivariate extreme value theory.
Abstract:
Resilience of pre-existing and new-build infrastructure to natural hazards is of key interest for many different industries (e.g. energy, water, transport). In most situations, studies analyzing the risk posed by single natural hazards have already been undertaken and relevant protection measures have been implemented. However, when considering the potential impacts of compound events or multi-hazards, there can be less confidence in which combinations need to be considered and how to estimate the risks associated with these multi-hazard scenarios. Certain industries (e.g. nuclear) have already undertaken several projects on the occurrence and risks posed by multi-hazards (e.g. ASAMPSA-E, NARSIS), whereas other industries are still trying to understand their risks and which questions need to be posed.
The EDF Energy R&D UK Centre is part of an industry scheme funded by NERC, the Environmental Risks for Infrastructure Innovation Programme (ERIIP), which aims to connect academics to industrial organisations and undertake translational research. One of the key topics for further research identified by this group is compound events and multi-hazards. A recent review of knowledge on multi-hazards, undertaken by the British Geological Survey (BGS), highlighted the state of knowledge across UK infrastructure owners.
This presentation will start by summarizing this review to pull out some key themes for future research in this area. Two ongoing research projects that address these themes will then be outlined. One project attempts to better understand the different multivariate statistical methods available for assessing the probability of multi-hazards; the application of the different models will be shown on an example of extreme precipitation and wind speed. The other project aims to better understand the overarching meteorological conditions that can lead to compound flooding at coastal sites around the UK; it focuses less on estimating joint probabilities and more on producing clear visualisations for end-users.
How to cite: Winter, H., Tilloy, A., Hendry, A., and joly-Laugel, A.: Building Industry Resilience to Compound Events, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2327, https://doi.org/10.5194/egusphere-egu2020-2327, 2020.
EGU2020-3517 | Displays | ITS2.16/NH10.6
Wind characteristics in 2019 on the Polish Baltic coastKatarzyna Starosta and Andrzej Wyszogrodzki
Wind is one of the main elements shaping the weather and climate of our planet. This presentation characterizes the wind over the Polish coastal areas. We show distributions of wind speed and wind direction based on COSMO (Consortium for Small-scale Modelling) model forecasts at a mesh resolution of 2.8 km and their verification against 24-hour measurements for five synoptic stations: Swinoujscie, Kolobrzeg, Ustka, Leba and Hel. The Polish Institute of Meteorology and Water Management – National Research Institute (IMWM-NRI) runs the COSMO model operationally using two nested domains at horizontal resolutions of 7 km and 2.8 km. The model produces 36-hour and 78-hour forecasts four times per day for the 2.8 km and 7 km domains respectively; however, only the 00 UTC forecasts are utilized in this study. We show wind analyses at the synoptic stations for different time scales (hours, days, months). We also analyse situations of extreme winds, such as the passage of storm Alfrida over the Polish coast in January 2019. The results show the characteristic distribution of wind speed and direction at the interface between sea and land. Wind is both destructive and beneficial to human activity: it causes local flooding, damage in ports and fallen trees, but also provides clean energy for wind farms and serves tourism activities such as yachting or surfing. Poland recently announced plans to build an offshore wind farm in the Baltic Sea. Increasingly accurate wind forecasts are therefore one of the necessary elements for assessing the local climatology at the wind farm site and for providing warnings and decision support for its operation.
How to cite: Starosta, K. and Wyszogrodzki, A.: Wind characteristics in 2019 on the Polish Baltic coast, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3517, https://doi.org/10.5194/egusphere-egu2020-3517, 2020.
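The station verification step described above typically reduces to a few standard point scores. The following sketch (illustrative only; the Weibull-distributed "wind speeds" and the bias figures are hypothetical, not IMWM-NRI results) shows the usual bias/MAE/RMSE comparison of forecasts against station observations.

```python
import numpy as np

def verify_wind(forecast, observed):
    """Basic point-verification scores for wind speed forecasts (m/s)."""
    err = forecast - observed
    return {
        "bias": err.mean(),                   # mean error
        "mae": np.abs(err).mean(),            # mean absolute error
        "rmse": np.sqrt((err ** 2).mean()),   # root-mean-square error
    }

# synthetic example: a year of hourly forecasts vs. station observations
rng = np.random.default_rng(0)
obs = rng.weibull(2.0, size=365 * 24) * 6.0        # hypothetical hourly wind speeds
fct = obs + rng.normal(0.4, 1.2, size=obs.size)    # forecast with a slight positive bias
scores = verify_wind(fct, obs)
print({k: round(v, 2) for k, v in scores.items()})
```

In practice such scores would be stratified by station, month and forecast lead time, which is what allows the land-sea contrast in model performance to emerge.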
EGU2020-19954 | Displays | ITS2.16/NH10.6
On the joint impact of marine (storms) and terrestrial (flash floods) extreme events along the Catalan coast (NW Mediterranean)Marc Sanuy Vazquez, Montserrat Llasat-Botija, Tomeu Rigo, Jose A. Jiménez, and M. Carme Llasat
The Mediterranean coastal zone is a hotspot for the impact of extreme events due to the combination of high exposure (concentration of population, large urban areas, infrastructures), large vulnerability (the natural protection provided by beaches is decreasing due to coastal erosion) and the presence of extreme hydro-meteorological events of marine (storms) and terrestrial (flash floods) origin. The Catalan coast (NE Spain) can be considered a paradigm of such Mediterranean hotspots. On the one hand, due to its climatic conditions, orography and land use, flash floods are one of the main causes of inundation risk in the coastal fringe, inducing numerous damages and even casualties (e.g. Llasat et al. 2013). On the other hand, coastal damage associated with the impact of marine storms has been increasing during the last decades along this coast (Jiménez et al. 2012). However, existing studies have not analysed their joint impact to assess the most hazardous conditions, when the coastal zone is subjected to the combined action of both types of extreme events.
Within this context, this work analyses the combined occurrence of extreme events of terrestrial (flash floods) and marine (storms) origin along the Catalan coast. First, extreme events causing significant damage (based on reported damages, insurance costs and casualties) along the coast were identified for the period 1981-2014. 69 events were identified and classified according to their origin (marine and/or terrestrial). Each event was characterized in terms of its marine (wave height, period, direction, storm duration) and rainfall characteristics. Since the coastline is about 600 km long, these events occur at specific locations. To cover this spatial variability, storms were locally characterized using data from existing rain gauges and radar stations across the territory as well as hindcasted wave conditions along the entire coastal fringe. To fully characterize these events, synoptic conditions were also recorded.
From this, we first directly obtained the corresponding marginal probabilities of each event. Then, compound frequencies were assessed and compared to the marginal ones. Finally, we identified synoptic situations with a higher probability of associated compound hazards and bounded the range of corresponding wave and rain conditions. By jointly considering the locations where the events occurred, we identify coastal areas (and corresponding geomorphologic conditions) with higher probabilities of suffering damage due to the impact of compound extreme events.
This work was carried out within the framework of the M-CostAdapt (CTM2017-83655-C2-1-R) research project, funded by the Spanish Ministry of Economy and Competitiveness (MINECO/AEI/FEDER, UE).
Jiménez et al. 2012. Storm-induced damages along the Catalan coast (NW Mediterranean) during the period 1958–2008. Geomorphology 143, 24-33.
Llasat et al. 2013. Towards a database on societal impact of Mediterranean floods within the framework of the HYMEX project. NHESS, 13, 1337-1350.
How to cite: Sanuy Vazquez, M., Llasat-Botija, M., Rigo, T., Jiménez, J. A., and Llasat, M. C.: On the joint impact of marine (storms) and terrestrial (flash floods) extreme events along the Catalan coast (NW Mediterranean), EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19954, https://doi.org/10.5194/egusphere-egu2020-19954, 2020.
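The comparison of compound frequencies against marginal ones can be made concrete with a small sketch. This is a toy illustration, not the authors' analysis: the correlated "rain" and "wave" series below are synthetic, driven by a shared random factor standing in for a common synoptic situation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000  # days
# hypothetical daily maxima, correlated through a shared synoptic driver
driver = rng.normal(size=n)
rain = np.exp(0.8 * driver + rng.normal(scale=0.6, size=n))   # mm
waves = 2.0 + 0.8 * driver + rng.normal(scale=0.5, size=n)    # m

rain_ext = rain > np.quantile(rain, 0.95)     # marginal extreme: top 5%
wave_ext = waves > np.quantile(waves, 0.95)

p_rain, p_wave = rain_ext.mean(), wave_ext.mean()   # marginal probabilities (~0.05)
p_joint = (rain_ext & wave_ext).mean()              # compound frequency
p_indep = p_rain * p_wave                           # expectation under independence

# a ratio well above 1 indicates compounding beyond chance
print(p_joint, p_indep, p_joint / p_indep)
```

With real data, the same counting is done per location, and the events whose ratio exceeds the independence expectation are the ones traced back to specific synoptic situations.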
EGU2020-1219 | Displays | ITS2.16/NH10.6
North American Warm-west/Cold-East air temperature pattern and its linkage to different El Nino typesYao Ge and Dehai Luo
In recent years, the surface air temperature (SAT) anomalies in winter over North America have shown a “warm-West/cool-East” (WWCE) dipole pattern. The underlying mechanism of the North American WWCE dipole pattern has been an important research topic. This study examines the physical cause of the WWCE dipole generation.
It is found that the positive phase (PNA+) of the Pacific North American (PNA) pattern can lead to the generation of the WWCE SAT dipole. However, the impact of the PNA+ on the WWCE SAT dipole over North America depends on the type of the El Nino SST anomaly. When an Eastern-Pacific (EP) type El Nino occurs, the anticyclonic anomaly center of the PNA+ over the North American continent is displaced eastward to near 100°W due to intensified midlatitude westerly winds over the North Pacific, so that its anticyclonic anomaly dominates the whole of North America. In this case, the cyclonic anomaly of the PNA+ almost disappears over North America, and the WWCE SAT dipole is weakened. In contrast, when a Central-Pacific (CP) type El Nino appears, the anticyclonic anomaly center of the associated PNA+ is located over the North American west coast due to reduced midlatitude westerly winds over the North Pacific. As a result, the cyclonic anomaly of the PNA+ can appear over the eastern United States, resulting in an intensified WWCE SAT dipole over North America.
How to cite: Ge, Y. and Luo, D.: North American Warm-west/Cold-East air temperature pattern and its linkage to different El Nino types, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1219, https://doi.org/10.5194/egusphere-egu2020-1219, 2020.
EGU2020-20230 | Displays | ITS2.16/NH10.6
Understanding the relationship between extremes of wind and inland flooding in the UKOliver Halliday, Len Shaffrey, Dimosthenis Tsaknias, Hannah Cloke, and Alexander Siddaway
Windstorms and flooding pose a significant socio-economic threat to the United Kingdom and can cause significant financial loss. For example, the great October storm of 1987 damaged whole elements of the national electricity grid in the west of the UK. Storms can also be associated with heavy precipitation: extensive inland flooding was caused by a series of slow-moving storms during the winter floods of 2013/14 in the South East of England. The UK Met Office and Environment Agency estimated the financial loss attributable to the 1987 and 2013/14 events at €6.4bn and €1.5bn respectively. The question of correlations between windstorm and flood events remains open; for example, the risk of a 1987-scale event "colluding" with the economically adverse meteorology of the 2013/14 season is poorly quantified. If wind and flood risks are correlated, then insurers are under-estimating both capital requirements and risk policy prices, exposing them to very substantial liabilities.
Here, a collaborative project between academics and insurers has been undertaken to improve our understanding of the spatio-temporal distribution of risk from extreme, compound windstorm and inland flood events in the UK. Statistical analysis of different data sets (~40 years of winter ERA5 reanalysis daily maximum winds, as well as observational precipitation and river flow gauge data) reveals that wind and inland flooding are modestly correlated across the UK. In addition, we find substantially more compound events than expected by chance, some of which can be linked to named UK storms.
In terms of the large-scale atmospheric drivers, there appears to be no particular preferred path for the storms associated with compound wind and flood events. However, compound events appear to be moderated by the amount of rainfall in the days preceding a windstorm, rather than by the overall storminess of any given year. Further, we investigate the relationship between very extreme (200-year return period) windstorms and precipitation in 1000 years of high-resolution HiGEM climate simulations.
How to cite: Halliday, O., Shaffrey, L., Tsaknias, D., Cloke, H., and Siddaway, A.: Understanding the relationship between extremes of wind and inland flooding in the UK, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20230, https://doi.org/10.5194/egusphere-egu2020-20230, 2020.
EGU2020-21541 | Displays | ITS2.16/NH10.6
Preliminary study of Compound Events in Greece using high-resolution downscaled climate dataAthanasios Sfetsos, Jason Markantonis, Stylianos Karozis, Nadia Politi, and Diamando Vlachogiannis
Climate change is set to result in an increase of extreme weather events such as extreme precipitation, heatwaves, floods and droughts. Studying the possible increase of such events is of high importance, but equally important is studying the combination of these events, i.e. Compound Events. Here we focus on the combination of extreme precipitation and extreme wind speed for the region of Greece.
Greece, located in the Eastern Mediterranean, is as prone to climate change as the whole Mediterranean Basin. It is therefore crucial to understand how the country is affected by Compound Events of extreme precipitation and extreme wind speed. As a first step, we study the historic period 1980-2009 using model output data. The data for the historic period have been produced with the Weather Research and Forecasting (WRF) model downscaled to 5 km with a temporal resolution of 6 hours, using ERA-Interim data as input. The downscaling study that produced the atmospheric model dataset is described in Politi et al. (2018). The methodology for studying Compound Events in the area is presented together with preliminary results.
How to cite: Sfetsos, A., Markantonis, J., Karozis, S., Politi, N., and Vlachogiannis, D.: Preliminary study of Compound Events in Greece using high-resolution downscaled climate data, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21541, https://doi.org/10.5194/egusphere-egu2020-21541, 2020.
EGU2020-17947 | Displays | ITS2.16/NH10.6
Simulation of Spatial Temperature-Precipitation Compound Events with Circulation-Conditioned Weather GeneratorMartin Dubrovsky, Ondrej Lhotka, and Jiri Miksovsky
The GRIMASA project aims to develop a spatial stochastic weather generator (WG), especially a gridded version, applicable at various spatial and temporal scales, for both present and future climates. The multi-purpose SPAGETTA generator (Dubrovsky et al., 2019, Theoretical and Applied Climatology) being developed within this project is based on a parametric approach suggested by Wilks (1998, 2009). It was already presented at the EGU-2017 and EGU-2018 conferences. It is run mainly at a daily time step and can produce multivariate weather series for up to approximately 100 grid points. In developing and validating the generator, we also employ various compound weather indices defined by multiple weather variables, which allows the inter-variable correlations to be accounted for in the validation process. In our first experiments, the WG was run at 100 km resolution (50 km EOBS data were used for calibrating the WG) for eight European regions, and its performance was compared with RCMs (CORDEX simulations for the EUR-44 domain). In our EGU-2019 contribution, our WG was validated in terms of characteristics of spatial temperature-precipitation compound spells (including dry-hot spells). Most recently, after implementing wind speed and humidity into the generator, the WG was run at a much finer resolution (using data from irregularly distributed weather stations in Czechia and Sardinia) and validated in terms of spatial spells of wildfire-prone weather (using the Fire Weather Index); the results were presented at AGU-2019.
Present project activities aim mainly at (A) moving to finer spatial and temporal scales, and (B) conditioning the surface weather generator on larger-scale circulation simulated by a circulation weather generator run at much coarser resolution. Development of the circulation generator (CIRCULATOR) started in 2019. It is based on a first-order multivariate autoregressive model (similar to the one used in SPAGETTA), and the set of the generator's variables consists of larger-scale characteristics of atmospheric circulation (derived from the NCEP/NCAR reanalysis) and temperature and precipitation defined on a 2.5-degree grid. In our contribution, we will show results related to these two activities, focusing on (i) the WG's ability to reproduce spatial temperature-precipitation spells at various spatial scales (down to EUR-11 resolution) for eight European regions, (ii) validation of the circulation generator in terms of its ability to reproduce frequencies of circulation patterns and larger-scale temperature and precipitation characteristics for the eight regions, and (iii) assessing the effect of driving the surface weather generator with the circulation generator on its ability to reproduce the compound spells.
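The first-order multivariate autoregressive core mentioned above can be sketched minimally as follows; the parameter matrices here are illustrative toys, not fitted SPAGETTA or CIRCULATOR parameters.

```python
import numpy as np

def simulate_ar1(A, B, n_days, seed=None):
    """Simulate a first-order multivariate autoregressive process
    z[t] = A @ z[t-1] + B @ e[t], with e[t] ~ N(0, I).
    A and B are placeholder matrices for illustration only."""
    rng = np.random.default_rng(seed)
    k = A.shape[0]
    z = np.zeros((n_days, k))
    for t in range(1, n_days):
        z[t] = A @ z[t - 1] + B @ rng.standard_normal(k)
    return z

# Toy example: two correlated variables (e.g. a temperature and a
# precipitation-related anomaly) at a single grid point.
A = np.array([[0.8, 0.0],
              [0.1, 0.5]])   # lag-1 dependence (stable: eigenvalues < 1)
B = np.array([[0.6, 0.0],
              [0.2, 0.7]])   # correlated innovation structure
series = simulate_ar1(A, B, n_days=365, seed=42)
print(series.shape)  # (365, 2)
```

A gridded generator of this kind stacks many grid points into one state vector, so the same recursion carries both inter-variable and inter-site correlations.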
Acknowledgements: Projects GRIMASA (Czech Science Foundation, project no. 18-15958S) and SustES (European Structural and Investment Funds, project no. CZ.02.1.01/0.0/0.0/16_019/0000797).
How to cite: Dubrovsky, M., Lhotka, O., and Miksovsky, J.: Simulation of Spatial Temperature-Precipitation Compound Events with Circulation-Conditioned Weather Generator, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17947, https://doi.org/10.5194/egusphere-egu2020-17947, 2020.
ITS2.17/SSS12.2 – Geochemistry, soil contamination and human health: theoretical basis and practical approaches towards improvement of risk assessment
EGU2020-11255 | Displays | ITS2.17/SSS12.2
Urban soils and human health
Claudio Bini and Mohammad Wahsha
Since the dawn of civilization, anthropic activity has left a legacy of increasing land degradation and contamination. Potentially harmful elements (PHEs) are among the most significant environmental contaminants, and their release into the environment has been rising over recent decades. Interest in trace elements has grown into a major scientific topic over the last 50 years, since it was realized that some elements are essential to human health (e.g., Fe, Cu, Zn), while others are toxic (e.g., As, Hg, Pb) and likely responsible for serious human diseases with sometimes lethal consequences. Since that time, great progress has been made in understanding the links between environmental geochemistry and human health. The urban environment (nowadays the main habitat of the human population) is a potential PHE source, with high risk for residents' health. Indeed, PHE concentrations and distributions are related to traffic intensity, distance from roads, local topography, and heating. Industrial emissions also contribute to the release of toxic elements. Understanding the extent, distribution and fate of PHEs in the urban environment is therefore imperative for the sustainable management of urban soils and gardens in relation to human health.
Despite the extensive research addressed to this topic, the effects of most trace metals on human health are not yet fully understood. Uncertainty still prevails, particularly for non-essential elements that are "suspected" to be harmful to humans, causing severe health problems such as intoxication, neurological disturbances and even cancer. Some of them (e.g., As, Cd, Hg, Pb) have attracted most attention worldwide due to their toxicity towards living organisms. Other elements (Al, B, Be, Bi, Co, Cr, Mn, Mo, Ni, Sb, Sn, Tl, V, W) are likely harmful but may also play beneficial functions not yet well known, and should be investigated further.
Keywords: Urban soils; PHEs; Human health
How to cite: Bini, C. and Wahsha, M.: Urban soils and human health, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11255, https://doi.org/10.5194/egusphere-egu2020-11255, 2020.
EGU2020-1202 | Displays | ITS2.17/SSS12.2 | Highlight
Use of lead isotope ratios to assess sources of lead (Pb) dispersed in the urban environments during former industrial activity in the cities of Salgotarjan and Ozd (Hungary)
Gorkhmaz Abbaszade, Davaakhuu Tserendorj, Tan Le, Nelson Salazar, Dóra Zacháry, Péter Völgyesi, and Csaba Szabó
Lead pollution is a global problem. Lead is known to be neurotoxic, especially to children, because of its ability to replace essential elements in the body. Lead in the environment derives from multiple (natural and anthropogenic) sources over long time periods; anthropogenic Pb pollution mainly originates from mining, smelting, industrial uses, waste incineration, and the combustion of coal and leaded gasoline.
Identification of contamination and source apportionment of lead within the surface urban environment is a challenging task, mostly because multiple coexisting factors contribute to elevated concentrations. Lead has four natural stable isotopes (204Pb, 206Pb, 207Pb, and 208Pb), and because of the small fractional mass differences among them, ordinary chemical, physical or biological reactions cannot appreciably alter the isotopic composition of Pb [1]. It is therefore possible to use stable Pb isotopic compositions and/or ratios to trace the sources and transport of lead in the environment. The main aim of this study is to determine lead isotope ratios in urban soil samples in order to assess contamination sources and their connections in two former industrial cities, Salgótarján and Ózd, Hungary.
Urban soil samples were collected from residential areas (houses, parks, playgrounds and kindergartens) of the cities of Salgótarján and Ózd (36 and 60 samples, respectively), both of which were exposed to the harmful effects of industrial pollutants. The cities are situated in the same geological formation (a brown coal deposit), lie around 40 kilometers apart, and experienced a similar industrial history.
Results showed that the average stable lead isotope ratios of the analyzed soil samples are 206Pb/207Pb = 1.19 and 208Pb/207Pb = 2.47 for Salgótarján, and 206Pb/207Pb = 1.19 and 208Pb/207Pb = 2.40 for Ózd. In both cities, highly significant correlations were noted between 206Pb/204Pb and 207Pb/204Pb and between 206Pb/204Pb and 208Pb/204Pb (R2 = 0.7 and R2 = 0.8), reflecting the enrichment of Pb from anthropogenic sources. The stable isotope ratios of local coal, slag and central European leaded gasoline (206Pb/207Pb = 1.11 and 208Pb/207Pb = 2.37) were used as endmembers. Surprisingly, the stable isotope ratios differed considerably between Salgótarján and Ózd for coal (206Pb/207Pb = 1.18 vs 1.26 and 208Pb/207Pb = 2.47 vs 2.46) and for slag (206Pb/207Pb = 1.18 vs 1.16 and 208Pb/207Pb = 2.46 vs 2.41). A particular anomaly in the 206Pb/204Pb ratio was observed in Salgótarján around the coal-fired power plant, where local coal was used as the energy source, which might partly explain the high ratio. Based on the comprehensive isotopic analysis, the data suggest that coal combustion emissions and steelworks were the predominant Pb sources in both cities, while vehicular emissions and additional sources (e.g. leaded paint) cannot be excluded.
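The endmember logic can be illustrated with a simple two-endmember mixing calculation on the 206Pb/207Pb ratio. This is only a sketch using ratios quoted in the abstract (leaded gasoline 1.11, Salgótarján coal 1.18); it ignores differing Pb concentrations in the endmembers and is not the authors' actual apportionment method.

```python
def mixing_fraction(r_sample, r_end1, r_end2):
    """Fraction of Pb attributable to endmember 1 in a linear
    two-endmember mixing model on a single isotope ratio."""
    return (r_sample - r_end2) / (r_end1 - r_end2)

# Ratios from the abstract: gasoline 206Pb/207Pb = 1.11, local coal = 1.18,
# sample average = 1.19. A fraction > 1 means the sample lies outside the
# gasoline-coal mixing range, pointing away from gasoline as the main source.
f_coal = mixing_fraction(1.19, 1.18, 1.11)
print(round(f_coal, 2))  # 1.14
```

In practice, plotting samples in 206Pb/204Pb vs 207Pb/204Pb space and checking which endmembers bracket them, as the abstract describes, is more robust than a single-ratio calculation.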
Reference:
[1] M. Komárek, V. Ettler, V. Chrastný, and M. Mihaljevič, “Lead isotopes in environmental sciences: A review,” Environ. Int., vol. 34, no. 4, pp. 562–577, 2008.
How to cite: Abbaszade, G., Tserendorj, D., Le, T., Salazar, N., Zacháry, D., Völgyesi, P., and Szabó, C.: Use of lead isotope ratios to assess sources of lead (Pb) dispersed in the urban environments during former industrial activity in the cities of Salgotarjan and Ozd (Hungary) , EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1202, https://doi.org/10.5194/egusphere-egu2020-1202, 2020.
EGU2020-20950 | Displays | ITS2.17/SSS12.2
Estimation and temporal-spatial variation analysis of non-point source pollution in China's planting industry
Wang Yafei, Zuo Lijun, Zhao Xiaoli, and Zhang Zengxiang
Nitrogen (N) and phosphorus (P) are essential elements of living systems. China is the most populous country, and its population continues to grow, causing an increasing demand for food. But China's arable land resource is limited, and fertilization, while an effective means of improving crop yields, has also resulted in excessive inputs of nitrogen and phosphorus to crop planting systems. Non-point source pollution from the planting industry is becoming more and more serious, posing a great threat to water quality.
In this study, 14 major crops, accounting for 76% of the sown area and 87% of the yield in China, were selected. Based on land use/cover data, crop spatial distribution data and agricultural economic statistical survey data (from the 2010 China Statistical Yearbook), the data were distributed by a spatial allocation model according to county codes, yielding fertilizer application rates for the different crops. These results were then summed to obtain the overall N and P fertilizer application status in China.
On this basis, combined with terrain data (DEM), arable land information (distribution of paddy fields and upland), the planting patterns of the 14 crops, and a non-point source pollution control division classified by climate type, the cropland was divided into 56 different N and P loss modes. In the first national agricultural non-point source pollution census, N and P loss coefficients for the different modes were obtained through field monitoring and local investigation. From the coefficient table and the fertilizer application rates, the N and P losses of the planting industry in 2010 were calculated, and the results were analyzed to reach the following conclusions:
1. Fertilization of cultivated land in China covers a wide area, and the amount of fertilization varies greatly between regions. The basic distribution patterns of N and P fertilizer application are relatively consistent across the country: more is applied in the north than in the south, in the east than in the west, on plains than in mountains and plateaus, and on dry land than in paddy fields. The areas with high fertilization account for about 1/4 of the total fertilized area in China and are spatially clustered.
2. Fertilizer losses differ between crops. Taking winter wheat, one of the most important food crops in China, as an example, its P loss is mainly concentrated in the semi-humid Huang-Huai-Hai plains, the Chengdu Plain in the southern humid plain area, and parts of Jiangsu and Anhui provinces, while the central and southern areas of Henan and Hebei provinces and the Hanzhong plain suffered the most severe losses.
This research can provide a scientific basis and decision-making support for formulating a sustainable agricultural development strategy in China, and it can also inform the reduction of N and P losses and the control of water eutrophication worldwide.
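The coefficient-based loss calculation described above (fertilizer applied per mode times a mode-specific loss coefficient, summed over modes) can be sketched as follows; the mode data here are made-up placeholders, not the census values.

```python
def nutrient_loss(application_kg, loss_coefficient):
    """Nutrient loss (kg) for one loss mode: fertilizer applied in that
    mode multiplied by its empirically derived loss coefficient."""
    return application_kg * loss_coefficient

# Hypothetical modes: (applied N in kg, census-style loss coefficient).
# Real work would use the 56 N and P loss modes and measured coefficients.
modes = [(1_000_000, 0.05),
         (500_000, 0.12),
         (250_000, 0.02)]
total_loss = sum(nutrient_loss(applied, coeff) for applied, coeff in modes)
print(total_loss)  # 115000.0 kg of N lost across the three toy modes
```

Spatializing the result then amounts to evaluating this product cell by cell on the allocated fertilizer-application raster.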
How to cite: Yafei, W., Lijun, Z., Xiaoli, Z., and Zengxiang, Z.: Estimation and temporal-spatial variation analysis of non-point source pollution in China's planting industry, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20950, https://doi.org/10.5194/egusphere-egu2020-20950, 2020.
EGU2020-6729 | Displays | ITS2.17/SSS12.2
The impact of rainstorms on nutrient loading for a large and deep reservoir
Huiyun Li, Liancong Luo, and Yiming Zhang
A main cause of water eutrophication in lakes and reservoirs is nutrient enrichment, variations in which can be influenced by climatic change and anthropogenic activities. Rainfall has been reported to cause short-term increases in nutrients that threaten the water quality of lakes and reservoirs. Daily meteorological data collected over recent years in the Lake Qiandaohu basin highlight an increase in the frequency of extreme rainfall events. The relationships between nutrient (nitrogen and phosphorus) concentrations in the lake and the external loading discharged into it during 2013-2016 were analyzed. In parallel, the fluid movement and the diffusion of pollutants from the catchment were simulated with the ELCOM model. These analyses were carried out to evaluate the possible role of extreme precipitation events in affecting nutrient availability in the lake. Our results show that, for the largest inflow (the Xinanjiang River), 32.5% of annual TN and 32.8% of annual TP were delivered by rainstorms. Heavy rains can greatly shorten the time of pollutant migration. For lakes and reservoirs, extreme precipitation leads not only to a sharp increase in inflow but also to a significant increase in nutrient loading within a short period. Rainstorms can therefore be an important factor in climate-induced eutrophication of lakes and reservoirs.
How to cite: Li, H., Luo, L., and Zhang, Y.: The impact of rainstorms on nutrient loading for a large and deep reservoir, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6729, https://doi.org/10.5194/egusphere-egu2020-6729, 2020.
EGU2020-1587 | Displays | ITS2.17/SSS12.2
Soil contamination and ecological risk of heavy metals in alkaline vineyard soil
Nhung Thi Ha Pham, Izabella Babcsányi, and Andrea Farsang
Soil used for grape growing faces not only pollution problems but also potential ecological risk from heavy metals introduced by chemical fertilizers and Cu-fungicides. The Hétszőlő vineyard (1.4 ha), with an alkaline soil reaction (average pH of 8.02 in the 0-10 cm soil layer), located on the southern slope of Tokaj Hill, Tokaj-Hegyalja, Hungary, was chosen as the study area. The total concentrations of heavy metals, enrichment factors (EFs), the pollution load index (PLI) and contamination factors (CFs) were used to assess the current status and degree of heavy-metal pollution in the vineyard soil. In addition, the potential ecological risk was evaluated via the ecological risk factor (Ei) of each individual metal (Zn, Pb, Co, Ni, Cr, Cu) and the potential ecological risk index (PER) of all studied metals.
The analysis showed that, on average, all of the heavy metals had total contents below the Hungarian background and pollution limits (Joint Decrees 6/2009. (IV. 14) KvVM-EüM-FVM and 10/2000. (VI. 2) KöM-EüM-FVM-KHVM), except for Cu (36.19 mg/kg), Ni (36.50 mg/kg) and Cr (60.26 mg/kg). The topsoil of the Hétszőlő vineyard in Tokaj was thus moderately contaminated by Ni, Cr, and Cu. EF analysis (with Sc as the reference element) showed that Cu (EF = 2.70) was moderately enriched, whereas Zn (EF = 1.22), Pb (EF = 1.05) and Co (EF = 1.00) were not enriched in the vineyard topsoils. Although the mean EFs of Ni and Cr at Tokaj were 1.66 and 2.30, respectively, their EFmin values were around 1 and their EFmax values exceeded 2, demonstrating that these elements were enriched at some positions. The overall assessment of the EFs of all soil samples points to an anthropogenic origin of Cu, Cr, and Ni, while Zn, Pb, and Co were enriched mainly by geogenic processes; the enrichment of heavy metals was stronger at the bottom of the slope. The CF values fell into two groups: CF ≤ 1, indicating low contamination, for Pb (CF = 0.71) and Co (CF = 1.00), and 1 < CF < 3, indicating moderate contamination, for the remaining metals Zn, Ni, Cr and Cu, with CFs of 1.06, 1.68, 2.28 and 2.08, respectively. The topsoil of the Hétszőlő vineyard was considered moderately polluted overall, with a PLI of 1.35. The Ei results indicated that all heavy metals in the vineyard topsoil pose a low ecological risk, with the contaminants in descending order Cu (10.38) > Ni (10.07) > Co (4.98) > Cr (4.55) > Pb (3.54) > Zn (1.06). The mean PER was 34.59, which also indicates a low ecological risk for all metals in the vineyard soil.
Even though the potential ecological risk is currently low, the moderate heavy-metal pollution, the ongoing enrichment process, and the continued use of chemical compounds in viticulture could lead to serious heavy-metal pollution in the future.
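The indices used above can be sketched as follows. The CF values are those quoted in the abstract; the toxic-response factors follow the standard Hakanson values (Zn = 1, Cr = 2, Cu = Ni = Pb = Co = 5), and the authors' exact inputs may differ slightly, so the PER comes out close to but not exactly at the reported 34.59.

```python
import math

def pollution_load_index(cfs):
    """Tomlinson PLI: geometric mean of the contamination factors."""
    return math.prod(cfs) ** (1.0 / len(cfs))

def ecological_risk(cf, toxic_response):
    """Hakanson single-metal ecological risk factor Ei = Tr * CF."""
    return toxic_response * cf

# Contamination factors from the abstract (CF = C_sample / C_background).
cf = {"Cu": 2.08, "Ni": 1.68, "Cr": 2.28, "Zn": 1.06, "Pb": 0.71, "Co": 1.00}
# Standard Hakanson toxic-response factors (assumption, see lead-in).
tr = {"Cu": 5, "Ni": 5, "Cr": 2, "Zn": 1, "Pb": 5, "Co": 5}

pli = pollution_load_index(list(cf.values()))
per = sum(ecological_risk(cf[m], tr[m]) for m in cf)
print(round(pli, 2))  # 1.35, matching the abstract's PLI
print(round(per, 2))
```

A PLI above 1 flags overall deterioration of soil quality, while a PER below 150 is conventionally classed as low ecological risk, consistent with the abstract's conclusions.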
How to cite: Pham, N. T. H., Babcsányi, I., and Farsang, A.: Soil contamination and ecological risk of heavy metals in alkaline vineyard soil, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1587, https://doi.org/10.5194/egusphere-egu2020-1587, 2020.
The soil utilized for grape growing not only has faced the pollution problems but also could be suffered ecological risk by heavy metals from chemical fertilizers and Cu-fungicides. Hétszőlő vineyard (1.4 ha) with an alkaline reaction in soil (the average soil pH of the 0-10 cm soil layer was 8.02), which is located along and on the southern slope of Tokaj-hill, Tokaj-Hegyalja, Hungary was chosen as study area of this study. The total concentration of heavy metals, enrichment factors (EFs), pollution load index (PLI) and contamination factor (CF) were used to assess the current status and pollution degree of heavy metals in vineyard soil. Besides, the potential ecological risk would be evaluated via the ecological risk factor (Ei) of an individual metal (Zn, Pb, Co, Ni, Cr, Cu) and the potential ecological risk index (PER) of all studied metals.
Analysis results showed that all of the heavy metals had lower total contents on average compared with the Hungarian background and pollution limits (Joint Decree (6/2009. (IV. 14) KvVM-EüM-FVM and 10/2000. (VI. 2) KöM-EüM-FVM-KHVM), except for Cu (36.19 mg/kg), Ni (36.50 mg/kg) and Cr (60.26 mg/kg). Thus, the topsoil of Hétszőlő vineyard in Tokaj was contaminated by Ni, Cr, and Cu at a moderate level. EF analysis (Sc as reference element) reflected that Cu (EF = 2.70) was enriched moderately, in contrast Zn (EF = 1.22), Pb (EF = 1.05), Co (EF = 1.00) were not enriched in the vineyard topsoils. Although EF of Ni and Cr obtained at Tokaj were 1.66 and 2.30 respectively, EFmin of these studied metals were around 1 and they EFmax were higher than 2 demonstrated that these elements were enriched at some positions. The general assessment of EFs of all soil samples illustrated the anthropogenic origin of Cu, Cr, and Ni while Zn, Pb, and Co were enriched mainly from the geogenic process; and the enrichment process of heavy metals occurred more strongly at the bottom of the slope. CF, which was determined, could be divided into two groups in value, in which CF ≤ 1 presented a low contamination for Pb (CF = 0.71); Co (CF = 1.00), and 1 < CF < 3 was a moderate contamination for remaining metals Zn, Ni, Cr and Cu with CF figures were 1.06, 1.68, 2.28 and 2.08, respectively. Besides, the topsoil of Hétszőlő vineyard was considered in the moderate pollution status with FLI was 1.35. The results of Ei indicated that all heavy metal in the topsoil of vineyard showed a low ecological risk, with the descending order of contaminants was Cu (10.38) > Ni (10.07) > Co (4.98) > Cr (4.55) > Pb (3.54) > Zn (1.06). In addition, the mean PER was 34.59 and it revealed a low ecological risk for all metals in the vineyard soil. 
Even though the potential ecological risk is currently low, the moderate level of heavy-metal pollution, the ongoing enrichment process, and the continued use of chemical compounds in viticulture could cause serious heavy-metal pollution risks in the future.
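The indices used above follow standard definitions: CF is the sample-to-background concentration ratio, PLI the geometric mean of the CFs, Ei = Tr × CF with Hakanson-style toxic-response factors, and PER the sum of the Ei. A short sketch (not the authors' code) using the reported CF values and assumed Tr factors (Ni = 6 is our inference from the reported Ei values, not stated in the abstract) reproduces the reported PLI and PER:

```python
# Illustrative sketch of the pollution-index arithmetic; CF values are
# taken from the abstract, Tr values are assumed Hakanson-style factors.
from math import prod

CF = {"Zn": 1.06, "Pb": 0.71, "Co": 1.00, "Ni": 1.68, "Cr": 2.28, "Cu": 2.08}
TR = {"Zn": 1, "Pb": 5, "Co": 5, "Ni": 6, "Cr": 2, "Cu": 5}  # assumed

# Pollution load index: geometric mean of the contamination factors.
PLI = prod(CF.values()) ** (1 / len(CF))

# Ecological risk factor per metal, and the summed potential risk index.
Ei = {m: TR[m] * CF[m] for m in CF}
PER = sum(Ei.values())
```

With these inputs, PLI evaluates to about 1.35 and PER to about 34.6, consistent with the values reported in the abstract (small differences arise from rounding of the published CFs).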
How to cite: Pham, N. T. H., Babcsányi, I., and Farsang, A.: Soil contamination and ecological risk of heavy metals in alkaline vineyard soil, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1587, https://doi.org/10.5194/egusphere-egu2020-1587, 2020.
EGU2020-1602 | Displays | ITS2.17/SSS12.2
Biological consequences of the spatial-differentiated level of the 131I accumulation in sheep thyroid after Chernobyl accident
Elvira Denisova, Alexander Zenkin, Alexey Snegirev, Yuri Kurachenko, Gennady Kozmin, and Victor Budarkov
The aim of this work is to study the biological effects of 131I on sheep at different concentrations of stable iodine in the diet. The problem of estimating the absorbed dose in the sheep thyroid gland (TG) after the radiation accident at the Chernobyl NPP under conditions of natural micronutrient deficiency is considered. To determine the critical 131I dose in the sheep TG leading to its dysfunction and subsequent destruction, complex laboratory studies were performed to refine the compartmental model parameters, based on reliable experimental and theoretical data. Modern technologies are used to model the TG region. The radiation transport equation is solved by the Monte Carlo technique, which accounts for both the γ- and β-radiation of the internal 131I source and the contribution of all secondary radiations.
The studies were carried out on 64 sheep, divided into 10 groups based on general clinical condition and body weight. The first 5 groups comprised animals from the Gomel region (32 sheep; iodine content in the daily diet of 0.08 mg/kg), and the 6th-10th groups (32 sheep; 0.43 mg/kg) animals from the Vladimir region. Tests for iodine content in feed and water were performed at the Belarusian Institute of Experimental Veterinary Medicine (Minsk, 1989). The sheep of the 1st-3rd and 6th-8th groups (9 sheep per group) received a single peroral administration of 131I with the following activities per head: 3 µCi for the 1st and 6th groups, 15 µCi for the 2nd and 7th, and 72 µCi for the 3rd and 8th. The surviving sheep were vaccinated against Rift Valley fever and then exposed to infection with an epizootic strain of the virus of this disease.
The main theoretical result is the conversion factor from 131I activity to the average dose rate in the thyroid. The main practical result is the evaluation of the lower limit of the absorbed dose in the TG (~300 Gy) that leads to its destruction. Animals with a reduced content of stable iodine in the diet were characterized by an increased number of cells in venous blood, reduced serum thyroxine levels, and altered structure and functional activity of the thyroid and liver. In animals with low iodine nutrition, greater uptake of the isotope by the TG was noted, which produced larger (2-5 times) doses. In sheep with iodine deficiency, the number of leukocytes and thyroxine levels decreased, and survival was reduced. After the 131I intake, the sheep developed radiation-induced immunodeficiency, but the main mechanisms of the infectious process remained intact: post-vaccination reactions proceeded without complications and were characterized by antibody formation and the development of immunity.
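The conversion-factor result rests on standard internal-dosimetry bookkeeping: the average thyroid dose scales with the time-integrated activity of the intake, which decays with an effective half-life combining physical and biological removal. A hedged sketch of that relation follows; the biological half-life, the conversion factor k and the integration window are illustrative assumptions, not the authors' values:

```python
# Hedged sketch of activity-to-dose bookkeeping for a single 131I intake.
# T_BIO, k and the 60-day window are illustrative assumptions.
import math

T_PHYS = 8.02   # 131I physical half-life, days
T_BIO = 80.0    # assumed biological half-life of iodine in the thyroid, days
t_eff = 1.0 / (1.0 / T_PHYS + 1.0 / T_BIO)   # effective half-life, days
lam = math.log(2) / t_eff                    # effective decay constant, 1/day

def integrated_activity(a0_bq, t_days):
    """Time-integrated activity (Bq*day) of a single intake decaying
    with the effective half-life."""
    return a0_bq / lam * (1.0 - math.exp(-lam * t_days))

a0 = 72 * 3.7e4   # highest administered activity, 72 uCi, in Bq
k = 1.0e-9        # hypothetical dose per unit integrated activity, Gy/(Bq*day)
dose_gy = k * integrated_activity(a0, 60.0)
```

With these assumptions the effective half-life comes out near 7.3 days, shorter than the physical half-life, which is why iodine-deficient animals (greater and longer thyroid retention) received the larger doses noted above.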
How to cite: Denisova, E., Zenkin, A., Snegirev, A., Kurachenko, Y., Kozmin, G., and Budarkov, V.: Biological consequences of the spatial-differentiated level of the 131I accumulation in sheep thyroid after Chernobyl accident, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1602, https://doi.org/10.5194/egusphere-egu2020-1602, 2020.
EGU2020-1214 | Displays | ITS2.17/SSS12.2 | Highlight
Vulnerability assessment of the Trinity Aquifer in Texas due to sulfonamides antibiotics leaching to groundwater for
Kaiyi Zhang and Itza Mendoza
Texas has the largest cattle population and the highest poultry production in the United States. In the northeastern region, antibiotics have been widely used as veterinary pharmaceuticals (VPs) in Concentrated Animal Feeding Operations (CAFOs). Antibiotics that are not fully metabolized are excreted, causing soil pollution and, in turn, groundwater contamination. Sulfonamides' high excretion rate from animals, low sorption to soils, and inhibitory impact on nitrate-reducing bacteria enhance leaching and secondary pollution from inherent nitrate-N contamination. However, the transport of sulfonamides from the surface to groundwater remains poorly understood. This research assessed the vulnerability of the Trinity Aquifer by incorporating the major hydrogeological factors that control groundwater contamination, using the GIS-based DRASTIC method, along with the major chemical factors, using HYDRUS solute-transport modeling. The study reclassified and refined subareas with different vulnerability potentials by overlaying various spatially referenced digital data layers. Additionally, sulfonamide transport was simulated for different vulnerability scenarios to estimate the persistence of the antibiotics and the potential concentrations reaching the aquifer, informing predictive methods to prevent, mitigate and remediate groundwater contamination caused by sulfonamide antibiotics.
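The DRASTIC method referenced above is, at its core, a weighted sum of seven rated hydrogeological factors per map cell (the GIS layers supply the ratings). A minimal sketch of that calculation, with the canonical factor weights and a hypothetical set of cell ratings:

```python
# Sketch of the standard DRASTIC index: weighted sum of seven factors,
# each rated 1-10 per grid cell. Weights are the canonical values; the
# example ratings are hypothetical.
WEIGHTS = {
    "Depth to water": 5, "net Recharge": 4, "Aquifer media": 3,
    "Soil media": 2, "Topography": 1, "Impact of vadose zone": 5,
    "hydraulic Conductivity": 3,
}

def drastic_index(ratings):
    """Weighted sum of the 1-10 factor ratings; ranges from 23 to 230."""
    return sum(WEIGHTS[f] * r for f, r in ratings.items())

example = {f: 7 for f in WEIGHTS}   # hypothetical uniform cell ratings
print(drastic_index(example))       # 7 * 23 = 161
```

Higher index values flag cells that are intrinsically more vulnerable, which is where a study like this one would then run the solute-transport simulations.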
How to cite: Zhang, K. and Mendoza, I.: Vulnerability assessment of the Trinity Aquifer in Texas due to sulfonamides antibiotics leaching to groundwater for, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1214, https://doi.org/10.5194/egusphere-egu2020-1214, 2020.
EGU2020-4746 | Displays | ITS2.17/SSS12.2 | Highlight
Neonicotinoids in the Environment - Fate and Impact
Jessica Potts, David Jones, Richard Pywell, Andy Macdonald, and Paul Cross
Over the last half century, society's dependence on insect-assisted pollination of crops has risen by over 300% globally, while recent findings estimate a 76% decline in flying insect biomass over the last 27 years. These losses in invertebrate numbers are thought to be due to a combination of factors, including parasites and diseases, agricultural intensification, climate change and chemical exposure, including pesticides such as neonicotinoids.
Neonicotinoids are among the most widely used insecticides on the global market. Their systemic mechanisms allow for ease of application and relatively successful control of biting and sucking invertebrates; however, neonicotinoids have been strongly associated with recent declines in non-target organisms. Many neonicotinoids come directly into contact with the soil, either through application as a seed coating or soil drench, or through spray drift and drip from foliar applications. Relatively little research has focussed on the movement, fate and interactions of these chemicals in UK soils under typical field management strategies, although evidence suggests that the addition of soil bio-amendments, such as fertilisers, can influence the mechanisms behind pesticide mobility.
My study aims to quantify the effects of Acetamiprid-based pesticide mixtures on below-ground soil functions, through the analysis of their movement and behaviour in soils of contrasting organic matter contents. A secondary aim is to assess the impact of neonicotinoids on select non-target organisms.
We used 14C-labelled acetamiprid to track the behaviour of the mixtures compared to the pure active ingredient. Previous research has used only the pure active ingredient; however, this is not representative of true field scenarios. These spiked pesticides were added to soils of contrasting organic matter content collected from a long-term experiment at Woburn Experimental Farm, Rothamsted Research. We assessed the behaviour of these mixtures across a range of leaching, sorption and mineralisation experiments.
The mineralisation of all mixtures was comparatively slow, with <23% of any given chemical/SOM combination mineralised over the 60-day experimental period. The highest mineralisation rates occurred in the samples with the highest SOM levels. The preliminary leaching data showed that >80% of each chemical was recovered from the soil during the experiment. This, combined with the low sorption and mineralisation rates, suggests that neonicotinoids are highly persistent in the environment.
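The persistence implied by those mineralisation figures can be put in rough quantitative terms. Assuming simple first-order kinetics (our assumption; the abstract does not state the fitted model), the "<23% over 60 days" figure bounds the rate constant from above and the implied half-life from below:

```python
# Back-of-envelope bound assuming first-order mineralisation kinetics:
# fraction mineralised f(t) = 1 - exp(-k*t), so f(60 d) < 0.23 caps k.
import math

f_max, t_days = 0.23, 60.0
k_max = -math.log(1.0 - f_max) / t_days      # per day, upper bound on k
implied_half_life = math.log(2) / k_max      # minimum implied DT50, days
```

Under this assumption the implied mineralisation half-life exceeds about 159 days, which is consistent with the abstract's conclusion that these compounds are highly persistent.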
Ongoing work is being conducted to investigate the knock-on impacts and biological implications of acetamiprid use under true field conditions.
How to cite: Potts, J., Jones, D., Pywell, R., Macdonald, A., and Cross, P.: Neonicotinoids in the Environment- Fate and Impact, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4746, https://doi.org/10.5194/egusphere-egu2020-4746, 2020.
EGU2020-19709 | Displays | ITS2.17/SSS12.2 | Highlight
Geochemical modeling of chromium oxidation and treatment of polluted waters by RO/NF membrane processes
Ilaria Fuoco, Alberto Figoli, Alessandra Criscuoli, Rosanna De Rosa, Bartolo Gabriele, and Carmine Apollaro
Geogenic Cr(VI) contamination is a worldwide environmental issue that mainly occurs in areas where ophiolitic rocks crop out. In these areas Cr(VI) can reach high concentrations in groundwater, becoming highly dangerous for human health. Indeed, Cr(VI) is recognized as a highly toxic element with high mobility and bioavailability [1]. Due to these features, as of July 2017 the Italian government lowered the Cr(VI) limit for drinking water to 10 µg/L. To improve living standards in contaminated areas, it is necessary (i) to understand the release and fate of the contaminant during water-rock interaction and (ii) to develop efficient remediation systems for naturally polluted waters. In this regard, a complementary study on the genesis and treatment of Cr-rich groundwater from Italian ophiolitic aquifers was conducted. Reaction path modelling is a proven geochemical tool for understanding the release of Cr and its oxidation from Cr(III) to Cr(VI) during water-rock interaction. The hypothesis generally accepted by the scientific community is that geogenic Cr(III) oxidation is driven by the reduction of trivalent and tetravalent manganese (Mn(III), Mn(IV)) [2], whereas in this work the role of trivalent Fe hosted in serpentine minerals was re-evaluated. Unlike Mn, Fe is the main oxidant present in suitable amounts in these rocks. Literature data confirm the presence of Fe(III) in serpentine minerals; hence reaction path modelling was performed varying the Fe(III)/Fe(tot) ratio from 0.60 to 1.00. The theoretical paths reproduce the analytical concentrations of relevant solutes, including Cr(VI), in the Mg-HCO3 water type hosted in the ophiolitic aquifers of Italy [3]. With an increasing Fe(III)/Fe(tot) ratio in serpentine minerals, high Cr(VI) concentrations are held in solution up to high alkalinity values. In addition, the spring with the highest Cr(VI) content (75 µg/L) was treated to lower its concentration below the threshold values.
In this work, membrane technologies were used as an innovative method, considering their many benefits, such as improving product quality without using chemicals [4]. A laboratory-scale set-up was used to carry out both nanofiltration (NF) and reverse osmosis (RO) experiments. The experiments were conducted on different commercial membranes: one NF membrane module, DK (polyamide), and two RO membrane modules, AD (polyamide) and CD (cellulose). Tests were performed varying the operating pressure, and high Cr(VI) rejections (around 95%) were reached for all tested membranes, yielding water with Cr(VI) concentrations below the threshold limits. The high flux obtained already at lower operating pressures (27 L/m2h at 10 bar), combined with the high selectivity towards Cr(VI), makes NF a favorable remediation option. The results obtained in this work are in line with the few data available in the literature for natural contaminated waters and are quite promising for future scientific developments and applications.
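The rejection figures translate directly into permeate quality via the standard observed-rejection relation R = 1 - Cp/Cf. A one-line check using the abstract's own numbers (75 µg/L feed, ~95% rejection, 10 µg/L limit):

```python
# Observed-rejection arithmetic: R = 1 - Cp/Cf, rearranged for the
# permeate concentration Cp given the feed concentration Cf.
def permeate_conc(feed_ug_l, rejection):
    """Permeate concentration from feed concentration and rejection."""
    return feed_ug_l * (1.0 - rejection)

cp = permeate_conc(75.0, 0.95)   # approximately 3.75 ug/L, below 10 ug/L
```

Even the most contaminated spring thus ends up well under the Italian drinking-water limit at 95% rejection, which is why both the NF and RO modules met the threshold.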
References
[1] Marinho, B. A., et al., 2019. Environ. Sci. Pollut. Res., 26(3), 2203-2227.
[2] Oze, C., et al., 2007. Proc. Natl. Acad. Sci., 104, 6544-6549.
[3] Apollaro, C., et al., 2019. Sci. Total Environ., 660, 1459-1471.
[4] Figoli, A. & Criscuoli, A., 2017. Springer (Singapore); ISBN: 9789811056215.
How to cite: Fuoco, I., Figoli, A., Criscuoli, A., De Rosa, R., Gabriele, B., and Apollaro, C.: Geochemical modeling of chromium oxidation and treatment of polluted waters by RO/NF membrane processes, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19709, https://doi.org/10.5194/egusphere-egu2020-19709, 2020.
EGU2020-22446 | Displays | ITS2.17/SSS12.2
Measurement of soil-sebum partition coefficients for high molecular weight polycyclic aromatic hydrocarbons present at former gasworks, UK
Darren J. Beriro, Mark R. Cave, Joanna Wragg, Russell Thomas, Christopher Taylor, Jonathan Craggs, Alexander W. Kim, Paul Nathanail, and Christopher H. Vane
The current research builds on the findings of a systematic literature review by the authors, which recommended working towards a standardised method for measuring the in vitro dermal absorption of high molecular weight polycyclic aromatic hydrocarbons (HMW-PAH) in soils. One part of such a method is understanding the partitioning of HMW-PAH from soil to the sebum found in skin. In vitro HMW-PAH soil-sebum partition coefficients (KSS) were measured for twelve soils collected from former UK gasworks. Concentrations of the ∑16 USEPA PAH in the soils ranged from 51 to 1440 mg/kg, and benzo[a]pyrene from 3.2 to 132 mg/kg. Time-series extractions (0.5, 1, 2, 4, 8 and 24 h) at skin temperature (32°C) of HMW-PAH from soil to sebum were conducted for two samples to determine the maximum-release time-step. The maximum HMW-PAH release time-step was determined to be 4 h, which was subsequently used as the extraction time for the remaining samples. Evaluation of the KSS data for the 4 h extractions showed that soil type and selected HMW-PAH properties (literature-based molecular weight (MW) and octanol-carbon partition coefficients (KOC)) affect the amount of HMW-PAH released from soil into sebum. Characterisation of soil properties was limited to total organic carbon, which showed no relationship to KSS. Selected soils showed distinctly higher KSS than others. The relationship between MW and KSS was statistically significant, while the relationship between KOC and KSS was not. Further research is required to improve our understanding of which soil and HMW-PAH properties affect the release of HMW-PAH from soil into sebum, and why.
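The quantity being measured can be written down compactly. Assuming the conventional concentration-ratio form of a partition coefficient, KSS = C_sebum / C_soil at the 4 h equilibration step (the abstract does not spell out the formula, so this is our assumption, and the numbers below are hypothetical):

```python
# Minimal sketch of a soil-sebum partition coefficient as a
# concentration ratio; the example concentrations are hypothetical.
def k_ss(c_sebum_mg_kg, c_soil_mg_kg):
    """Soil-sebum partition coefficient: sebum/soil concentration ratio."""
    return c_sebum_mg_kg / c_soil_mg_kg

example = k_ss(2.0, 10.0)   # hypothetical pair of equilibrium concentrations
```

Under this reading, the soils with "distinctly higher KSS" are those releasing a larger share of their PAH load into sebum at equilibrium, which is the exposure-relevant quantity for dermal risk assessment.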
How to cite: Beriro, D. J., Cave, M. R., Wragg, J., Thomas, R., Taylor, C., Craggs, J., Kim, A. W., Nathanail, P., and Vane, C. H.: Measurement of soil-sebum partition coefficients for high molecular weight polycyclic aromatic hydrocarbons present at former gasworks, UK, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22446, https://doi.org/10.5194/egusphere-egu2020-22446, 2020.
EGU2020-16476 | Displays | ITS2.17/SSS12.2
Soil contamination by pesticide residues – what and how much should we expect to find in EU agricultural soils based on pesticide recommended uses?
Vera Silva, Xiaomei Yang, Luuk Fleskens, Coen Ritsema, and Violette Geissen
Pesticides are heavily used in agriculture to reduce crop losses due to pests, weeds and pathogens. The intensive and long-term use of pesticides raises major health and environmental concerns since a substantial part of applied pesticides does not reach the target and is distributed into the environment instead. Exposure data (i.e. data on occurrence and levels of different pesticide residues), a pre-requisite to perform comprehensive and cumulative pesticide risk assessments, are scarce and fragmented, especially for soil. As analysing all EU soils for pesticide residues is not realistic, the contamination status of EU soils has to be inferred. Given pesticide use data limitations, the representative uses of the active substances (a.s.) allowed in the EU market, which cover different application schemes and recommended application rates, can be used as a proxy to estimate the type and amount of pesticide residues in EU soils. These representative uses are also considered in the calculation of predicted environmental concentrations in soil (PECs), the closest to a soil quality indicator when it comes to residues in soils of currently used pesticides. Although both pesticide representative uses and PECs are publicly available, this information is presented for individual a.s., in respective EU dossiers, which up to now were never compiled into a database and explored as such. Therefore, our study provides the first predictions on the total pesticide content in EU soils, calculated for 8 different crops (i.e. cereals, maize, root crops, non-permanent industrial crops, permanent crops, grapes, dry pulses-vegetables-flowers, and temporary grassland), 3 EU regions (i.e. Northern, Central and Southern Europe), and 2 pesticide use scenarios (i.e. all pesticides applied and no herbicides used). 
These predictions are integrated into the Soil Quality Mobile App SQAPP, a recently launched, freely available tool that integrates existing soil quality information (covering both soil properties and soil threats) and provides tailored recommendations to improve soil quality. Furthermore, we present soil quality thresholds for all these crop-region-use combinations in terms of the number of active substances and the total pesticide content expected in soil. Our results indicate a much higher variety of products allowed in cereals than in other crops, yet the highest pesticide load is expected in dry pulses-vegetables-flowers and in grapes. Under the heaviest use scenario (i.e. all allowed substances applied at the same time and at the worst recommended use patterns), pesticide input can exceed 1,200 kg a.s. ha-1 year-1 (dry pulses-vegetables-flowers). Pesticide input is expected to be highest in Southern Europe and lowest in Northern or Central Europe, depending on crop type. Predicted pesticide levels in soil were in line with the application data, with the highest contents in dry pulses-vegetables-flowers and in Southern Europe. The prediction-based thresholds resulted in very low soil protection, especially when compared with measured data in the literature and with measurement-based thresholds. Finally, our results reinforce the need for monitoring and surveillance programmes for pesticide residues in soil, proper risk-evaluation procedures for mixtures, and threshold methodologies for pesticides.
How to cite: Silva, V., Yang, X., Fleskens, L., Ritsema, C., and Geissen, V.: Soil contamination by pesticide residues – what and how much should we expect to find in EU agricultural soils based on pesticide recommended uses?, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-16476, https://doi.org/10.5194/egusphere-egu2020-16476, 2020.
EGU2020-3348 | Displays | ITS2.17/SSS12.2 | Highlight
Glyphosate and AMPA dissipation at different depths in conventional and conservation agriculture
Laura Carretta, Alessandra Cardinali, Giuseppe Zanin, and Roberta Masin
Glyphosate is the most used herbicide worldwide, especially in conservation agriculture, where the lack of mechanical weed control often necessitates chemical inputs. In conservation agriculture, the elimination of tillage operations leads to changes in soil physical, chemical, and biological properties. Consequently, the environmental fate of herbicides may be altered relative to conventional tillage systems. The aim of this study was, therefore, to investigate the effect of conservation agriculture and conventional tillage on the adsorption of glyphosate and on the dissipation of glyphosate and its primary metabolite aminomethylphosphonic acid (AMPA) at two depths, 0−5 and 5−20 cm.
The field trial was conducted from October 2018 to April 2019 at the Padua University Experimental Farm, North-East Italy. Glyphosate was applied as a formulated product (Roundup Power 2.0) at a dose of 1.44 kg/ha of active ingredient. The dissipation of glyphosate and the formation and dissipation of AMPA were followed for 182 days after application. The concentrations of glyphosate and AMPA in the soil were analysed by ultra-high performance liquid chromatography coupled with mass spectrometry. The dissipation of glyphosate was described by the first-order multicompartment (FOMC) model, whereas the model for AMPA combined an FOMC degradation model for glyphosate with a single first-order (SFO) degradation model for AMPA. The estimated concentration trends over time for both glyphosate and AMPA were used to derive their DT50 (the time required for dissipation of 50% of the initial concentration).
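Once the kinetic parameters are fitted, the DT50 of both model types follows in closed form: for single first-order decline C(t) = C0·exp(-kt) it is ln(2)/k, and for the FOMC (Gustafson-Holden) model C(t) = C0/(t/β + 1)^α it is β(2^(1/α) - 1). A sketch with illustrative parameter values (not the fitted values of this study):

```python
import math

def dt50_sfo(k):
    """DT50 for single first-order decline C(t) = C0 * exp(-k * t)."""
    return math.log(2.0) / k

def dt50_fomc(alpha, beta):
    """DT50 for the first-order multicompartment (FOMC, Gustafson-Holden)
    model C(t) = C0 / (t / beta + 1) ** alpha."""
    return beta * (2.0 ** (1.0 / alpha) - 1.0)

# with alpha = 1 the FOMC DT50 reduces to beta
assert dt50_fomc(1.0, 18.0) == 18.0
# and the FOMC curve is indeed at half its initial value at t = DT50
t = dt50_fomc(0.8, 12.0)
assert abs((t / 12.0 + 1.0) ** -0.8 - 0.5) < 1e-12
```

In practice α, β (or k) would be obtained by non-linear least squares against the measured residue time series before evaluating these expressions.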
The results indicate increased glyphosate adsorption in non-tilled soil compared to tilled soil at both depths. Initial glyphosate dissipation was fast and was followed by a slower decline. At 0–5 cm, no significant difference in glyphosate persistence was observed between the two soil management systems, whereas at 5–20 cm glyphosate was more persistent in non-tilled soil (DT50 of 18 days) than in tilled soil (DT50 of 8 days). The fast initial dissipation of glyphosate was reflected in an increase in the concentration of AMPA. AMPA persisted longer than glyphosate but, for this metabolite, no apparent effect of soil management was observed. The higher persistence of glyphosate under conservation tillage might increase the risk of on-site soil pollution due to accumulation of this chemical, especially with repeated glyphosate applications. Nevertheless, the high glyphosate adsorption observed in non-tilled soil may reduce the leaching potential to lower soil depths.
This abstract falls in the group “Soil contamination” and the subgroup “Experimental assessment”.
How to cite: Carretta, L., Cardinali, A., Zanin, G., and Masin, R.: Glyphosate and AMPA dissipation at different depths in conventional and conservation agriculture, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3348, https://doi.org/10.5194/egusphere-egu2020-3348, 2020.
EGU2020-6667 | Displays | ITS2.17/SSS12.2
Analyzing the impact of soil properties on glyphosate degradation
Anneli S. Karlsson, Lutz Weihermüller, Stephan Köppchen, Harry Vereecken, Michaela Dippold, and Sandra Spielvogel
In 1974, Monsanto introduced the glyphosate [N-(phosphonomethyl)glycine] product RoundUp, and in the 1990s also glyphosate-resistant crops. Since then, and increasingly after the expiry of the patent, glyphosate has become the most commonly used herbicide worldwide, with an estimated worldwide use of more than 800,000 tons of active ingredient per year (estimate from 2014). A herbicide of such widespread use, commercial in agriculture and private in gardens, has been found in many different environmental compartments (e.g. surface waters and food) and has negative impacts on non-target organisms (e.g. glyphosate resistance in weeds, or bactericidal effects). Glyphosate persistence and degradation differ strongly between soils. In this study, we focus on elucidating the factors contributing to the persistence and degradation of glyphosate in two contrasting soils. The effects of different chemical additives (N, P, DOM), as well as of pH change and microbial transfer alongside glyphosate application, were investigated with a 14C-glyphosate multi-labeling approach. The study shows that pH initially has a strong positive impact on mineralization in both soils, while DOM addition increased mineralization only slightly. Phosphate addition showed contrasting results in the two soils, and nitrate addition lowered mineralization significantly. Microbial transfer did not have any significant effect on mineralization. Furthermore, we identify adsorption of glyphosate to soil as one of the major factors reducing glyphosate degradation.
How to cite: Karlsson, A. S., Weihermüller, L., Köppchen, S., Vereecken, H., Dippold, M., and Spielvogel, S.: Analyzing the impact of soil properties on glyphosate degradation, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6667, https://doi.org/10.5194/egusphere-egu2020-6667, 2020.
EGU2020-13165 | Displays | ITS2.17/SSS12.2
Fate of glyphosate and AMPA in the vadose zone: dissipation, transport and adsorption
Marta Mencaroni, Nicola Dal Ferro, Alessandra Cardinali, Laura Carretta, Leonardo Costa, Stefano Mazzega Ciamp, Francesco Morari, Paolo Salanin, and Giuseppe Zanin
Widespread contamination by the systemic herbicide glyphosate (GLP; N-(phosphonomethyl)glycine) and its metabolite aminomethylphosphonic acid (AMPA) in soil and water has become one of the main environmental issues worldwide, raising awareness of the potential harmful effects on human health and ecosystems. Physical, chemical, and biological soil properties contribute to the complex interaction between GLP and the environment, which makes any prediction of adsorption, transport, and degradation dynamics still challenging.
Within a wider project (SWAT) that aims to link GLP and AMPA dynamics through the vadose zone with groundwater contamination, the specific goals of this work are: 1. monitoring soil and water contamination by GLP and AMPA in agricultural lands; 2. identifying the driving factors behind site-specific soil-water contaminant interactions.
Two experimental sites were located in northeastern Italy (Conegliano and Valdobbiadene municipalities) in the winegrowing terroir of Prosecco wine production, recently included in the UNESCO World Heritage List. Each site was equipped with two soil-water monitoring stations (25 m2 each), multi-sensor soil probes (temperature and water content) and suction lysimeters to monitor the full soil profile. Undisturbed soil cores were also collected and later analyzed for hydraulic, physical and chemical properties down to 70 cm. After GLP application in November 2018 (0.188 g m-2), soil and water were systematically sampled at each site, starting immediately after application and thereafter at each rain event for 6 months. Adsorption coefficients (Kf) were estimated in the laboratory to characterize GLP sorption to soil particles in the different layers along the full soil profile. Site-specific dissipation kinetics (DT50) were also evaluated to better understand the decay rate.
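Kf values of this kind are typically derived from batch-equilibrium data via the Freundlich isotherm Cs = Kf·Cw^(1/n). A minimal sketch of the usual log-linearised least-squares fit (the abstract does not specify the fitting procedure, and the data below are synthetic):

```python
import math

def fit_freundlich(cw, cs):
    """Least-squares fit of the log-linearised Freundlich isotherm
        log10(Cs) = log10(Kf) + (1/n) * log10(Cw).
    cw: equilibrium solution concentrations, cs: sorbed concentrations.
    Returns (Kf, 1/n)."""
    xs = [math.log10(c) for c in cw]
    ys = [math.log10(c) for c in cs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return 10.0 ** (my - slope * mx), slope

# synthetic batch data generated with Kf = 34, 1/n = 0.9 (made-up values)
cw = [0.5, 1.0, 5.0, 10.0]
cs = [34.0 * c ** 0.9 for c in cw]
kf, n_inv = fit_freundlich(cw, cs)
```

With noise-free synthetic data the fit recovers the generating parameters exactly; with real batch data the same regression yields the reported Kf per soil layer.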
First results revealed that GLP transport was highly site-specific and locally affected by preferential flows when intense rainfall events occurred (12 mm h-1 maximum rainfall intensity): GLP showed strong binding affinity to soil particles in the topsoil layer, yet it likely bypassed the porous matrix towards the deepest layers, where it was detected as in the surface layer. GLP dissipation was complete after the 6 months of the experiment, whereas AMPA was still detected in the topsoil layer, with full degradation attested only after almost 300 days. Site-specific laboratory and field data will be integrated and further discussed to better understand the fate of glyphosate and AMPA in the vadose zone.
How to cite: Mencaroni, M., Dal Ferro, N., Cardinali, A., Carretta, L., Costa, L., Mazzega Ciamp, S., Morari, F., Salanin, P., and Zanin, G.: Fate of glyphosate and AMPA in the vadose zone: dissipation, transport and adsorption, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13165, https://doi.org/10.5194/egusphere-egu2020-13165, 2020.
EGU2020-8375 | Displays | ITS2.17/SSS12.2
Non-linear character of redistribution of chemical elements in the biosphere components as a result of the living matter activity
Vyacheslav Korzh
Modern problems such as the acceptable limits of technosphere impact on the biosphere, the optimization of technosphere-biosphere interaction, the forecasting of the environmental consequences of technogenic accidents, and the organization of rehabilitation in the post-accident period place entirely new demands on knowledge. These challenges require the urgent development of new methodological foundations for studying mass transfer and the transformation of substances, and the structure of global biogeochemical systems in the biosphere. The chemical composition of oceans and seas is the result of substance migration and transformation at the biogeochemical river-sea and ocean-atmosphere “barriers”, i.e. at sites of “life condensation”. The stability of these processes is the main prerequisite for the stability of the hydrosphere ecosystem. Using a methodology of empirical generalization, a system of chemical element distribution in the hydrosphere has been established that possesses great predictive potential.
A comparison of the elemental compositions of different phases at the global level, within the hydrosphere-lithosphere-soil-atmosphere system, revealed a non-linear redistribution of elements between these phases, reflecting a general relative increase in the concentration of trace elements in the environment of living organisms due to biogeochemical processes. These processes are most active at the biogeochemical barriers, i.e. in localities of “concentrated life”, and are therefore inferred to result from the geologic activity of the ubiquitous living matter regulating its environment. The proposed non-linearity index shows a definite stability of the resulting impact of living matter across different systems, approximating 0.7: 1) 0.75 for the proto-lithosphere - sediment system; 2) 0.67 for the river - ocean system; 3) 0.7 for the ocean - atmosphere system. This value is believed to represent a universal constant of the biosphere, reflecting the biogenic stabilization of the global element cycles in the course of the biosphere's evolution, and corresponds to the biosphere concept of V.I. Vernadsky. The obtained values may be used as reference values in estimating biosphere stability and the anthropogenic contribution to the transformation of global biogeochemical cycles.
References.
Vernadsky V.I. (1994) Living Matter and Biosphere. Moscow: Nauka, 672 p. (in Russian).
Korzh V.D. (1974) Some general laws governing the turnover of substance within the ocean-atmosphere-continent-ocean cycle. Journal de Recherches Atmosphériques, Vol. 8, No. 3-4, p. 653-660.
Korzh V.D. (1991) Geochemistry of the Elemental Composition of the Hydrosphere. Moscow: Nauka, 243 p. (in Russian).
Korzh V.D. (2017) Biosphere. The Formation of Elemental Compositions of the Hydrosphere and Lithosphere. Saarbrücken: Lambert Academic Publishing, 63 p. (in Russian).
Korzh V.D. (2019) Transfer of trace elements in the ocean-atmosphere-continent system as a factor in the formation of the elemental composition of the Earth's soil cover. Environmental Geochemistry and Health, Vol. 41, p. 1-7.
How to cite: Korzh, V.: Non-linear character of redistribution of chemical elements in the biosphere components as a result of the living matter activity, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8375, https://doi.org/10.5194/egusphere-egu2020-8375, 2020.
EGU2020-20726 | Displays | ITS2.17/SSS12.2
Toxicity as a biogeochemical problem
Sergey Romanov and Elena Korobova
Most existing encyclopedic dictionaries and reference books define “toxic substances” as a certain group of elements and compounds capable of significantly worsening the physiological state of an organism. With the development of civilization, the amount and variety of such substances is steadily increasing, accompanied by a growing list of diseases of geochemical origin. At present the main efforts are focused on minimizing the consumption of these so-called toxicants, although almost the whole world population is in practice subjected to their impact. The prevailing inductive-empirical approach to the problem of toxicity certainly gives useful results, but it leads to an economically unacceptable increase in costs and, moreover, owing to the specific spatial and temporal variability of the controlled objects, the effectiveness of MPC (TLV) standards is significantly reduced.
Although this approach dominates, it does not exclude a deductive solution capable of addressing this class of problems in general, without a significant loss of accuracy or targeting of the results. Such a solution can be found on the basis of theoretical biogeochemistry. Our analysis leads to the following important inferences.
- Objectively “toxic” elements or compounds existed neither in the initial biosphere nor in the modern noosphere; only toxic concentrations exist.
- Diseases of a geochemical nature can be caused not only by toxic excess concentrations of elements or substances but may also result from an artificial deficiency due to strict adherence to MPC prescriptions.
- The final result of the ecological-geochemical impact on living organisms is determined by the specific spatial interference of geochemical fields of natural and technogenic origin.
- The problem of creating a universal algorithm for assessing the ecological-geochemical quality of a territory can be reduced to quantifying the difference between the ideal and the observed state of the environment.
The proposed approach has no obvious contraindications, and the achieved level of development in the measurement of elements and compounds, as well as in computer technology, makes the creation of a specialized technique practically feasible.
A unique opportunity to test the presented hypothesis appeared after the Chernobyl disaster, when the geochemical field of stable 127I was briefly overlain by the field of technogenic radioactive 131I.
The application of this approach opens a new path to eliminating diseases of a geochemical nature and will in the future allow the creation of specialized decision-making systems for the safe organization of territories, the formation of a strategy of environmental-geochemical regulation, and the prevention of microelementoses.
How to cite: Romanov, S. and Korobova, E.: Toxicity as a biogeochemical problem, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20726, https://doi.org/10.5194/egusphere-egu2020-20726, 2020.
EGU2020-3540 | Displays | ITS2.17/SSS12.2
Effects of air pollution, urbanization and temperature on chronic kidney disease
Chenyu Liang
The prevalence of chronic kidney disease (CKD) in China is around 11%, which poses a serious public health challenge. Air pollution and personal habits have been cited as major causes of chronic kidney disease, but a number of studies have suggested that urbanization and meteorological factors may also play a role. This study therefore established a longitudinal population cohort of 47,204 Chinese residents, combined it with geographic methods to obtain PM2.5 and temperature data, and used a multiple regression model and a random forest algorithm to explore the impact of air pollution, urbanization and temperature on chronic kidney disease. The results showed that the contributions of temperature, urbanization and PM2.5 to CKD were second only to individual factors such as age and BMI, and that the contributions of temperature and urbanization to eGFR were higher than that of PM2.5. This offers a new perspective for the study of non-communicable diseases such as chronic kidney disease. As climate warming and urbanization accelerate, more attention should be paid to their impact on disease.
How to cite: Liang, C.: Effects of air pollution, urbanization and temperature on chronic kidney disease, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3540, https://doi.org/10.5194/egusphere-egu2020-3540, 2020.
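The regression step described above can be illustrated with a small synthetic sketch. The predictor names follow the abstract, but the data, effect sizes, and the resulting ranking are invented for illustration, since the cohort data are not public:

```python
import numpy as np

# Synthetic stand-in for the cohort: 5 standardized predictors and a
# continuous "eGFR-like" response. Effect sizes are ASSUMED, chosen only to
# mimic the ordering the abstract reports (age/BMI > temperature/urbanization > PM2.5).
rng = np.random.default_rng(0)
n = 1000
X = rng.standard_normal((n, 5))
true_beta = np.array([-0.8, -0.5, -0.35, -0.25, -0.1])
y = X @ true_beta + 0.3 * rng.standard_normal(n)

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# With standardized predictors, |coefficient| gives a crude contribution ranking
names = ["age", "BMI", "temperature", "urbanization", "PM2.5"]
ranking = [name for name, _ in
           sorted(zip(names, np.abs(coef[1:])), key=lambda t: -t[1])]
print(ranking)
```

A random forest's feature importances would give an analogous ranking; this sketch shows only the multiple-regression half of the abstract's approach.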
EGU2020-7812 | Displays | ITS2.17/SSS12.2
Application of two different health risk assessment approaches to detect soil potentially toxic element induced risk
Gevorg Tepanosyan, Lilit Sahakyan, and Armen Saghatelyan
Soils of urbanized and mining areas inherit the main geochemical features of their parent materials and also accumulate potentially toxic elements (PTE) from different anthropogenic sources. The latter results in changes in soil chemical composition and high levels of PTE, which may negatively affect people’s health. In this study, 207 soil samples were collected from the entire territory of the city of Alaverdi, which hosts the Alaverdi copper smelter. After determination of Fe, Ba, Mn, Co, V, Pb, Zn, Cu, Cr, As and Mo concentrations by XRF, the resulting data set was subjected to PTE-induced health risk assessment. Two commonly used approaches - the Summary pollution index (Zc) [1]–[3] and the Hazard Index (HI, US EPA) [4] - were used to assess the human health risk posed by the studied PTE in the soils of Alaverdi city. The results showed that the detected concentrations are mainly a superposition of PTE contents introduced into the environment by natural mineralization processes and by activities related to the Alaverdi copper smelter. The risk assessment showed that Zc values belonging to the extremely hazardous level have a point-like spatial pattern and are surrounded by the hazardous and moderately hazardous levels, respectively. The Summary pollution index indicated that approximately 53% of the city territory, including the residential part, is at risk, suggesting an increase in the overall incidence of diseases among frequently ill individuals, functional disorders of the vascular system, and chronic diseases in children [1]. The US EPA method was in line with the Zc results and indicated that the observed element contents pose a non-carcinogenic risk to adults mainly near the copper smelter.
For children, single-element non-carcinogenic risk values greater than 1 were detected for As, Fe, Co, Cu, Mn, Pb and Mo in 122, 95, 86, 10, 10, 9 and 6 of the 207 soil samples, respectively, and the mean HQ values decrease in the following order: As(2.41)>Fe(1.14)>Co(1.09)>Mn(0.61)>Pb(0.41)>Cu(0.32)>V(0.19)>Mo(0.11)>Cr(0.05)>Ba(0.03)>Zn(0.02). Multi-elemental non-carcinogenic risk was observed across the entire territory of the city, indicating an adverse health effect for children. The results of this study suggest the need for immediate risk reduction measures, with special attention to arsenic.
References:
[1] E. K. Burenkov and E. P. Yanin, “Ecogeochemical investigations in IMGRE: past, present, future,” Appl. Geochemistry, vol. 2, pp. 5–24, 2001.
[2] C. C. Johnson, A. Demetriades, J. Locutura, and R. T. Ottesen, Mapping the Chemical Environment of Urban Areas. 2011.
[3] Y. E. Saet, B. A. Revich, and E. P. Yanin, Environmental geochemistry. Nedra, 1990.
[4] RAIS, “Risk Exposure Models for Chemicals User’s Guide,” The Risk Assessment Information System, 2020. [Online]. Available: https://rais.ornl.gov/tools/rais_chemical_risk_guide.html. [Accessed: 01-Jan-2020].
How to cite: Tepanosyan, G., Sahakyan, L., and Saghatelyan, A.: Application of two different health risk assessment approaches to detect soil potentially toxic element induced risk, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7812, https://doi.org/10.5194/egusphere-egu2020-7812, 2020.
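The two screening indices compared in this abstract follow standard formulas (Zc after Saet et al. [3]; HQ/HI-style soil-ingestion screening after US EPA [4]). A minimal sketch, using illustrative concentrations and placeholder reference doses rather than the Alaverdi data:

```python
# All concentrations, backgrounds, exposure parameters and reference doses
# below are ILLUSTRATIVE placeholders, not the study's values.

def summary_pollution_index(conc, background):
    """Saet summary pollution index: Zc = sum(Kc_i) - (n - 1),
    summing only elements enriched above background (Kc = Ci/Cb > 1)."""
    kc = [c / b for c, b in zip(conc, background) if c / b > 1]
    return sum(kc) - (len(kc) - 1)

def hazard_index(conc, rfd, ing_rate=1e-4, ef=350, ed=6, bw=15, at=2190):
    """US EPA-style non-carcinogenic screening for child soil ingestion:
    ADD_i = Ci * IngR * EF * ED / (BW * AT);  HQ_i = ADD_i / RfD_i;
    HI = sum(HQ_i).  Units: conc mg/kg, ing_rate kg/day, RfD mg/(kg*day)."""
    add = [c * ing_rate * ef * ed / (bw * at) for c in conc]
    return sum(a / r for a, r in zip(add, rfd))

conc = [30.0, 120.0, 800.0]     # hypothetical As, Pb, Mn contents, mg/kg
background = [10.0, 40.0, 700.0]
rfd = [3e-4, 3.5e-3, 0.14]      # placeholder reference doses

print(round(summary_pollution_index(conc, background), 2))
print(round(hazard_index(conc, rfd), 2))
```

The two indices answer different questions: Zc measures enrichment over geochemical background, while HI compares an estimated daily dose against toxicological reference doses, which is why the abstract applies both.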
EGU2020-3825 | Displays | ITS2.17/SSS12.2
Quantification of Nitrogen in Food Chain Wastes in Jiangsu Province and Its Environmental Effects
Yajuan Zhang
Nitrogen is an indispensable nutrient for the growth of plants and animals and one of the key elements controlling the structure and function of terrestrial, freshwater and marine ecosystems. Studying the amount of nitrogen in the “crop-livestock-household” food chain system can safeguard human health and lay a foundation for food production. In this paper we focus on the counties of Jiangsu Province, in the lower reaches of the Yangtze River, as the study area. Material flow analysis and the mass balance method, combined with field research, were used to construct a food-chain production-consumption nitrogen flow model and to quantitatively analyze nitrogen in the crop-livestock-household system. During 2000–2016, total nitrogen inputs to the food chain of Jiangsu Province decreased from 3.38×10⁶ t N yr⁻¹ to 3.15×10⁶ t N yr⁻¹. The crop production subsystem has become the main nitrogen input source in the food chain (77.84%). This may be caused by the ever-increasing urbanization rate and the shrinkage of farmland.
Compared with 2000, the amount of waste nitrogen produced by the “crop-livestock-household” system in 2016 decreased by 31.17%. During this period the poultry breeding subsystem contributed most to the total waste nitrogen (36.84%), followed by the household consumption subsystem (35.90%) and the crop production subsystem (27.25%). Waste nitrogen is mainly recycled by returning it to the field and using it as feed, but the nitrogen recycling rate is low (18.46–21.85%). The main sources of the environmental nitrogen load are chemical fertilizer application, manure, straw and kitchen waste. Utilizing waste nitrogen can recycle resources effectively and conforms to the concept of building a resource-saving and environment-friendly society; such measures benefit sustainable development.
How to cite: Zhang, Y.: Quantification of Nitrogen in Food Chain Wastes in Jiangsu Province and Its Environmental Effects, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3825, https://doi.org/10.5194/egusphere-egu2020-3825, 2020.
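A quick arithmetic check of the totals and waste shares reported above (all numbers are taken from the abstract; the subsystem breakdown of inputs is not reproduced here):

```python
# Mass-balance style bookkeeping on the abstract's reported figures.
input_2000 = 3.38e6   # t N / yr, total food-chain N input in 2000
input_2016 = 3.15e6   # t N / yr, total in 2016

decline = (input_2000 - input_2016) / input_2000  # relative drop in inputs

# Reported contributions to total waste nitrogen in 2016 (percent)
waste_shares = {"poultry breeding": 36.84,
                "household consumption": 35.90,
                "crop production": 27.25}

print(f"input decline: {decline:.1%}")                       # ~6.8 %
print(f"waste shares sum: {sum(waste_shares.values()):.2f} %")  # ~100 %
```

Note that total inputs fell only ~6.8% while waste nitrogen fell 31.17%, i.e. the system's nitrogen use became markedly more efficient over the period, not just smaller.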
EGU2020-6024 | Displays | ITS2.17/SSS12.2
Why should we care about Carrying Capacity? A novel screening tool for the health risk in recreational waters near estuary
Morena Galešić, Mariaines Di Dato, and Roko Andričević
The present work proposes a novel screening tool to improve the quality of recreational coastal waters. The recreational potential of a beach resort depends on its health status, which in coastal cities may be threatened by increasing anthropogenic stress. In particular, we focus on a beach near an estuary, which may be affected by a considerable contaminant load, especially when the urban sewage system is combined and designed to spill untreated wastewater directly into the coastal water. In short, when Combined Sewer Overflows (CSOs) are activated, the bacterial concentration in the estuary increases, resulting in a potential hazard for swimmers’ health. Bacterial transport is modelled through a physically based stochastic framework, whereas the human health risk is evaluated by means of Quantitative Microbial Risk Assessment (QMRA). The quantified health risk is then used to evaluate the Carrying Capacity indicator of the recreational coastal water, defined as the number of swimmers that can be sustained by the coastal water within an acceptable risk threshold. The results indicate that the Carrying Capacity increases with dilution and with reduction of the source concentration. This indicator may serve as a screening tool for policy-makers and other stakeholders: for instance, it can help balance the resources needed to improve the sewage system against the benefits coming from tourism and sustainable environmental policies, given that beach quality in turn depends on improvements in the sewage system.
How to cite: Galešić, M., Di Dato, M., and Andričević, R.: Why should we care about Carrying Capacity? A novel screening tool for the health risk in recreational waters near estuary, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6024, https://doi.org/10.5194/egusphere-egu2020-6024, 2020.
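The carrying-capacity idea can be sketched with a toy QMRA chain. The exponential dose-response form and every parameter value below are assumptions for illustration, not the authors' calibrated stochastic transport model:

```python
import math

def carrying_capacity(c_source, dilution, p_acc=0.036, r=0.0012,
                      v_ingest=0.05, shed=1e5, mix_vol=1e7):
    """Largest swimmer count N with P_ill(N) <= p_acc, assuming:
      C(N)  = c_source/dilution + N*shed/mix_vol   [organisms/L]
      dose  = C(N) * v_ingest                      [organisms per swim]
      P_ill = 1 - exp(-r * dose)   (exponential dose-response)
    All parameters are HYPOTHETICAL placeholders."""
    dose_max = -math.log(1.0 - p_acc) / r      # max tolerable dose
    c_max = dose_max / v_ingest                # max tolerable concentration
    c_background = c_source / dilution         # CSO-driven background level
    if c_max <= c_background:
        return 0                               # background alone exceeds threshold
    return int((c_max - c_background) * mix_vol / shed)

# Carrying capacity grows with dilution, as the abstract reports:
print(carrying_capacity(c_source=5e4, dilution=100))
print(carrying_capacity(c_source=5e4, dilution=1000))
```

The design choice worth noting is that the indicator inverts the usual QMRA question: instead of computing risk for a given number of users, it solves for the number of users compatible with a fixed acceptable risk.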
EGU2020-4426 | Displays | ITS2.17/SSS12.2
Indicators for Environment Health Risk Assessment in the Jiangsu Province of China
zhang jing and zhang shujie
Following the Pressure-State-Response framework, this study established an indicator system that reflects the comprehensive environment and health risk of an area at large scale. The system includes 17 specific indicators covering social and economic development, pollution emission intensity, air pollution exposure, population vulnerability, living standards, medical and public health services, and culture and education. A weight was assigned to each indicator through the Analytic Hierarchy Process (AHP). A comprehensive environment and health risk assessment of 58 counties was conducted in Jiangsu province, China, and the results were divided into four risk levels. Higher-risk counties are all located in the economically developed southern region of Jiangsu province, while relatively high-risk counties lie along the Yangtze River and in Xuzhou County and its surrounding areas. Relatively low-risk counties are spatially dispersed, and lower-risk counties are mainly located in the middle region of the province, where the economy is somewhat weaker. The assessment results provide a reasonable and scientific basis for the Jiangsu provincial government in formulating environment and health policy. Moreover, the approach provides a methodological reference for comprehensive environment and health risk assessment over large areas (provinces, regions and countries).
How to cite: jing, Z. and shujie, Z.: Indicators for Environment Health Risk Assessment in the Jiangsu Province of China, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4426, https://doi.org/10.5194/egusphere-egu2020-4426, 2020.
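The AHP weighting step works by extracting the principal eigenvector of a pairwise-comparison matrix. A toy 3×3 sketch (the study's 17-indicator comparison matrix is not given in the abstract, so the judgments below are invented):

```python
import numpy as np

# Toy pairwise-comparison matrix on Saaty's 1-9 scale: entry (i, j) says how
# much more important indicator i is than indicator j. Reciprocal by construction.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

# AHP weights = normalized principal eigenvector of A
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)            # index of the principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()                        # priority weights, summing to 1

# Saaty consistency index: CI = (lambda_max - n) / (n - 1); small CI means
# the pairwise judgments are nearly consistent.
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
print(np.round(w, 3), round(ci, 3))
```

In practice the consistency index is compared against a random index to form a consistency ratio, and judgments are revised when the ratio exceeds about 0.1.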
EGU2020-1995 | Displays | ITS2.17/SSS12.2
Assessment of the contribution of natural, technogenic and radionuclide factors to the spread of cattle leukemia in the Chelyabinsk region
Natalia Shkaeva, Artyom Shkaev, and Viktor Budarkov
The Chelyabinsk region spans several physiographic countries and zones: the Ural mountainous country and the West Siberian lowland country, which in turn occupy the mountain-forest, forest-steppe and steppe zones. The tense ecological situation of the region is associated with radioactive and intense technogenic pollution of its territory. The natural radiation background was exceeded after the major radiation accident at Kyshtym, which formed the East Ural Radioactive Trace (EURT), located mainly in the Ural mountain physiographic region in the north of the region. Technogenic pollution is caused by emissions from large industrial enterprises and by soil degradation resulting from mining operations. In all, the EURT covered 384 settlements (29.7%) in the Chelyabinsk region.
The aim of this work is to assess the contribution of natural, radionuclide and technogenic factors to the level and risk of the spread of cattle leukemia in the Chelyabinsk region, one of the Russian regions most affected by this cattle disease. Objects of research: black-motley cattle, calves of different ages, fattening young animals, and lactating cows. The monitoring period was 1993-2018. Within the EURT and the zone of influence of the Techa and Bagaryak rivers, 5 districts of the Chelyabinsk region were investigated: Argayashsky, Kaslinsky, Krasnoarmeysky, Kunashaksky and Sosnovsky. The controls were another 23 districts not contaminated with radioactive fallout after the accident at the Mayak Production Association.
A statistically significant association was established between the degree of radioactive contamination of the territory of the Chelyabinsk region and the intensity of the epizootic situation for cattle leukemia. The degree of influence of natural and socio-economic background factors on the frequency of occurrence and the extent of damage from the disease was calculated. For the first time, simulation models are presented that reflect the relationship between the density of radionuclide contamination and the frequency of registration of affected sites, the number of animals infected with the bovine leukemia virus, and animals culled due to leukemia. Cartograms of the spatial distribution of the relative registration frequency (stationarity index) and the leukemia infection rate of livestock were compiled. A comparative analysis of the epizootic-situation cartograms with maps of technogenic pollution and of the natural and socio-economic background showed that the highest values of situation tension are confined to areas of high technogenic pollution, including radioactive pollution (urbanized areas), with intensive dairy farming in the forest and forest-steppe landscape zones. Using elements of logical modeling, in the form of a nonlinear logical multiplication of a probability model of disease occurrence and a model of possible infection of the livestock with leukemia, 5 zones of epizootological risk were identified in the Chelyabinsk region for the period until 2020. The areas of highest epizootological risk are the northern, most urbanized areas of the region.
How to cite: Shkaeva, N., Shkaev, A., and Budarkov, V.: Assessment of the contribution of natural, technogenic and radionuclide factors to the spread of cattle leukemia in the Chelyabinsk region, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1995, https://doi.org/10.5194/egusphere-egu2020-1995, 2020.
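The "nonlinear logical multiplication" of the two models is not spelled out in the abstract; one plausible reading, sketched here with invented values and thresholds, combines the two probability surfaces by their product (the probabilistic analogue of logical AND) and bins the result into five risk zones:

```python
# Speculative sketch of the risk-zoning step: two probability surfaces are
# combined cell-by-cell and mapped to discrete risk zones. The combination
# rule, probabilities, and binning are ASSUMPTIONS for illustration only.

def risk_zone(p_occurrence, p_infection, n_zones=5):
    """Combine disease-occurrence and infection probabilities by their
    product and map the result onto 1..n_zones equal-width risk classes."""
    p = p_occurrence * p_infection
    return min(int(p * n_zones) + 1, n_zones)

print(risk_zone(0.9, 0.8))  # a heavily affected cell -> high zone
print(risk_zone(0.2, 0.3))  # a weakly affected cell  -> lowest zone
```

The product rule is nonlinear in each input, which matches the abstract's wording, but other AND-like operators (e.g. the minimum) would be equally consistent with the description.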
EGU2020-8360 | Displays | ITS2.17/SSS12.2
Association between PM2.5 exposure by inhalation and brain damages of Alzheimer’s disease in transgenic mice
pengfei fu and Ken Kin Lam Yung
Pengfei Fu a,b and Ken Kin Lam Yung a,b,* (corresponding author: kklyung@hkbu.edu.hk)
a Department of Biology, Hong Kong Baptist University, Hong Kong SAR, China
b Golden Meditech Center for NeuroRegeneration Sciences
ABSTRACT
Background: Exposure to fine particulate matter (PM2.5) increases the risk of neurological disorders. However, the association between PM2.5 and Alzheimer’s disease (AD) remains to be established, and the effect of PM2.5 exposure on the brain in AD mice is unclear.
Objective: To assess the effects of PM2.5 exposure on AD and investigate the brain damage in AD transgenic mice exposed to PM2.5.
Methods: We searched the PubMed database for articles for a meta-analysis of the association between PM2.5 exposure and AD. Further, using a novel real-world whole-body inhalation exposure system in Taiyuan, China, wild-type (WT) and APP/PS1 transgenic mice (AD mice) were exposed to filtered air (FA) or ambient PM2.5 for 8 weeks. Pathological and ultrastructural changes and levels of Aβ-42, TNF-α and IL-6 in the brains of FA-WT, FA-AD, PM2.5-WT and PM2.5-AD mice were measured.
Results: In the meta-analysis, long-term PM2.5 exposure was associated with increased risks of dementia and AD, with odds ratios of 1.16 (95% CI 1.07–1.26) and 3.26 (95% CI 0.84–12.74), respectively. Both lightly and heavily polluted countries showed such increased risks. In the open field test, PM2.5-AD mice showed more pronounced degenerative symptoms of AD, reflected in behavioral changes in movement. Hematoxylin-eosin staining showed noticeable histopathological injury, such as structural disorder, hyperemia, and sporadic inflammatory cell infiltration, in the brains of PM2.5-AD mice, and transmission electron microscopy revealed serious damage in the brains of PM2.5-AD mice, including disordered cristae and vacuolation of mitochondria, synaptic abnormalities, and loose myelin sheaths. Aβ-42, TNF-α, and IL-6 levels in the brains of PM2.5-AD mice were more strongly elevated than those of FA-WT or FA-AD mice.
Conclusion: This study indicates a strong association between PM2.5 exposure and AD risk. PM2.5 significantly aggravated the severity of neuronal pathomorphological changes and inflammation in AD mice, in which Aβ-42 levels in the brain were visibly increased.
Acknowledgment:
Car-SCs Treatment Technology and Study the Application of Inorganic Nanomatrices on MSC T-cells Proliferation (Project Ref: RMGS-2019-1-03).
How to cite: Fu, P. and Yung, K. K. L.: Association between PM2.5 exposure by inhalation and brain damages of Alzheimer’s disease in transgenic mice, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8360, https://doi.org/10.5194/egusphere-egu2020-8360, 2020.
EGU2020-18582 | Displays | ITS2.17/SSS12.2 | Highlight
Geomedical application of copper isotope ratios: change of δ65Cu in xenograft model of human cancers
Gabriella Kiss, Enikő Vetlényi, Lívia Varga, Ildikó Krencz, Titanilla Dankó, Gergely Rácz, Csaba Szabó, and László Palcsu
In geosciences, high-precision isotope ratio determination provides essential information about processes in geological systems. Novel ambitions are evolving closer to biological applications. Copper is an essential metal for the human body, taking part in several cellular processes (e.g. the respiratory chain, enzyme function, iron metabolism, elimination of reactive oxygen species, cell signalling pathways, etc.). However, disorders of copper homeostasis cause serious diseases such as Wilson disease (Cu accumulation in the liver caused by a genetic disorder), and copper could also promote tumour growth by supporting angiogenesis and metastasis formation [Denoyer et al., 2015]. Despite numerous experiments focusing on copper concentration determination in different tumour tissues (e.g. breast, lung cancer, etc.) in the hope of assisting tumour diagnosis, the results are not convincing enough. However, previous studies on hepatocellular cancer and oral squamous cell carcinoma showed that tumour tissue appears to be relatively enriched in 65Cu compared to normal tissue, whereas the δ65Cu in the blood of tumorous patients is decreased relative to data obtained from a control population [Balter et al., 2015; Lobo et al., 2017]. Our main aim is to elaborate a method to better understand the change in the 63Cu/65Cu stable isotope ratio during tumour growth. In this approach, we present our first results on copper isotope ratio determination in a xenograft mouse model. The model was established in SCID (severe combined immunodeficiency disease) mice by injecting human cancer cells (1×10⁷ cells) subcutaneously. After the tumour reached approximately 2–3 cm in diameter, the tumour mass was cut into small, equal pieces and transplanted further into 10 mice to increase the homogeneity of the experimental set-up. All animals were sacrificed by cardiac puncture under deep terminal anaesthesia within four weeks. Tumours and organs were removed with a ceramic knife, then frozen with liquid nitrogen and stored at -80°C.
We measured the copper concentration and δ65Cu in the tumour tissue, blood, liver, kidney and brain. A clean laboratory environment was chosen for the sample preparation processes to decrease environmental contamination. Separation of copper from other biologically essential elements (Na, Mg, Fe, Zn) that interfere with the copper isotope measurement is a critical step in the preparation [Lauwens et al., 2017]. The effects of sodium (23Na40Ar+) and magnesium (25Mg40Ar+) interferences on the copper isotope ratio were resolved by measuring not at the peak centre but on the interference-free plateau. Our measurements were carried out on a Thermo Neptune PLUS multicollector mass spectrometer equipped with 9 moveable Faraday detectors, 3 amplifiers with a resistance of 10¹³ Ohm, and 6 amplifiers with a resistance of 10¹¹ Ohm, under wet plasma conditions. For the mass spectrometric measurement of the copper isotope ratio, the sample is doped with either a Ni or Ga reference material, which has a well-known isotope ratio.
References:
Balter V. et al. PNAS 2015; 112: 982−985.
Denoyer D. et al. Metallomics 2015; 7: 1459−76.
Lauwens S. et al. J. Anal. At. Spectrom. 2017; 32: 597−608.
Lobo L. et al. Talanta 2017; 165: 92−97.
How to cite: Kiss, G., Vetlényi, E., Varga, L., Krencz, I., Dankó, T., Rácz, G., Szabó, C., and Palcsu, L.: Geomedical application of copper isotope ratios: change of δ65Cu in xenograft model of human cancers, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18582, https://doi.org/10.5194/egusphere-egu2020-18582, 2020.
EGU2020-2206 | Displays | ITS2.17/SSS12.2
Urov endemic disease: peculiarities of biogeochemical food chains
Vadim Ermakov, Uliana Gulyaeva, Valentina Danilova, Vladimir Safonov, Sergey Tyutikov, and Alexander Degtyarev
A comparative assessment was conducted of the content levels and migration parameters of biologically active chemical elements in the biogeochemical food chains (rocks–soils–plants–animal hair and milk) of the main localities of the Urov endemic disease in Eastern Transbaikalia. A differentiated polyelement microelementosis, with an excess of Sr [1], Mn, Cr, and Ni, in some cases P, Ba, As, and Zn, and a deficiency of Se, I, Cu, and Mo, is typical of the Urov biogeochemical provinces of Eastern Transbaikalia relative to background territories. Soil landscapes do not differ much in selenium content, but its migration into plants was reduced in places where Urov disease is spread. The migration parameters of chemical elements in the soil–plant complex were reflected in their content in wheat, animal hair and cow milk. The sources of this imbalance are the soil-forming rocks, the specific conditions of soil formation (accumulation of organic matter in freezing soils of narrow valleys with a high degree of moisture and low flow), and selective concentration by plants. Floodplain soils with a high level of organic matter are characterized by a high content of micromycetes of the genus Fusarium, both in species composition and abundance. The data obtained are consistent with the results of research by Chinese scientists assessing the chemical elemental composition of hair in healthy children and in children with Urov (Kashin–Beck) disease [2], and are considered risk factors in the genesis of this endemic disease.
References
How to cite: Ermakov, V., Gulyaeva, U., Danilova, V., Safonov, V., Tyutikov, S., and Degtyarev, A.: Urov endemic disease: peculiarities of biogeochemical food chains, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2206, https://doi.org/10.5194/egusphere-egu2020-2206, 2020.
EGU2020-8165 | Displays | ITS2.17/SSS12.2
Balneology in Estonia: importance of the geochemical background information of the Estonian curative mud
Galina Kapanen and Jaanus Terasmaa
In Estonia, natural remedies were commonly used during the first half of the 19th century. Thanks to its specific geological and geomorphological characteristics, Estonia has several significant deposits of lacustrine and marine curative (or therapeutic) mud, which have public health and commercial benefits. Many Estonian spas have traditionally incorporated a combination of natural remedies with a range of physical therapies, including gentle exercise, massage, and heat and water therapies. Yet, mud research in Estonia has stagnated since the 1990s, and therefore one of the main limiting factors for the redevelopment of its public and commercial use is the lack of up-to-date scientific understanding of the sediment composition and deposit characteristics.
There is a long-term tradition of using fine-grained sediments (mud) for cosmetic and medical purposes, but more precise information about the characteristics of such sediments is lacking. Since there are no specific standards regarding the bio-geo-chemical composition of curative mud, only the different geochemical and bioactive compound groups could be identified.
We reviewed the regional history of curative mud and the existing scientific rationale for the public and commercial applications of mud for healing purposes. We mapped the spatial distribution of organic and mineral matter and heavy metals in the surface sediments of the Estonian curative mud deposits. Polycyclic aromatic hydrocarbons (PAHs) were monitored in the Haapsalu curative mud. Importantly, the geochemical characterisation is used to provide insights into all of the mud deposits and the broader ecosystem services of muds. The presence of heavy metals in mud is not always dangerous, because many factors can affect their toxicity, including pH and the oxygen, mineral and organic content. Muds can also be used in assessing environmental quality, since the pollutants contained in them reflect the conditions of the water body in which they were deposited. The mechanisms of action of curative mud are not fully elucidated; the net benefit is probably the result of a combination of mechanical, thermal and chemical effects. Additional studies are necessary to clarify the mechanisms of action of balneology.
How to cite: Kapanen, G. and Terasmaa, J.: Balneology in Estonia: importance of the geochemical background information of the Estonian curative mud, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8165, https://doi.org/10.5194/egusphere-egu2020-8165, 2020.
EGU2020-9000 | Displays | ITS2.17/SSS12.2 | Highlight
Application of Geoinformation Technologies for minimization of thyroid gland diseases in the impact areas of the radioiodine fallout
Vladimir Baranchukov, Elena Korobova, and Sergey Romanov
Modern geoinformation technologies are widely used in spatial data analysis, including in medical geography for locating the spatial distribution of site-specific diseases. Understandably, most such maps have been constructed for the most dangerous diseases. Although thyroid goiter has been known since ancient times, it was not until the middle of the 19th century that Chatain related this disease to a deficiency of a particular chemical element (iodine), and not until 1938 did Vinogradov coin the notion of biogeochemical provinces to distinguish areas of specific endemic diseases of geochemical origin, summarizing the natural factors causing iodine deficiency in local diets and contributing to goiter manifestation. The Chernobyl accident highlighted the problem of the combined negative impact of radioiodine contamination and stable iodine deficiency. Technogenic and natural isotopes of iodine have specific spatial structures, and this fact opened new prospects for identifying areas under different risk levels using GIS technology. To study the geochemical factors responsible for the distribution of thyroid gland diseases in the Chernobyl fallout area, we created and developed a specialized geographic information system based on the idea of a two-layer spatial structure of the modern noosphere (Korobova, 2017), according to which the natural geochemical background, reflected in the soil cover structure, is overlain by technogenic contamination fields. As a result, an interferential image is produced. This image can be interpreted as a risk map, which in turn may be verified against health effects. The study was performed for 4 regions subjected to the Chernobyl accident (Bryansk, Oryol, Kaluga and Tula oblasts). An overlay of the natural iodine deficiency and technogenic iodine fallout map layers, each classified into 6 zones from minimum to maximum risk, allowed us to identify 12 combined zones and to evaluate a combined risk for 93 rural districts.
Comparison of the combined risk map with regional medical data on the standardized incidence of thyroid cancer (ICD-10 code C73) showed a higher correlation (r = 0.493, n = 93) than the map of radionuclide contamination levels alone. This demonstrates that the proposed GIS technology will be useful for minimizing thyroid diseases.
References
Korobova, E.M. Principles of spatial organization and evolution of the biosphere and the noosphere. Geochem. Int. 55, 1205–1282 (2017) doi:10.1134/S001670291713002X
How to cite: Baranchukov, V., Korobova, E., and Romanov, S.: Application of Geoinformation Technologies for minimization of thyroid gland diseases in the impact areas of the radioiodine fallout, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9000, https://doi.org/10.5194/egusphere-egu2020-9000, 2020.
EGU2020-635 | Displays | ITS2.17/SSS12.2
Methodological aspects of extracting heavy metals from soils and sediments
Marina Burachevskaya, Tatiana Minkina, Saglara Mandzhieva, and Valery Kalinichenko
Soil and sediment contamination by heavy metals (HMs) can create a significant risk to human health. Many human activities produce waste, much of which is discharged into soils as well as into rivers and other water bodies, where it accumulates in sediments. The behavior of pollutants in terrestrial ecosystems is characterized by their fractional composition rather than by their total content in the soil and sediments. Different sample preparation techniques are used to determine HMs: sifting through a sieve with a hole diameter of 1 mm (AAB; McLaren and Crawford, 1973; Miller et al., 1986; etc.), of 2 mm (EDTA, DTPA, etc.), or of 0.25 mm (Tessier et al., 1979). Another problem is the readsorption of metals, which depends on the extraction conditions. As a result, there are a number of difficulties in comparing the results obtained by different extraction methods.
The main objective of this work was to study the influence of sample preparation and readsorption processes on the extractability of HMs from soil and bottom sediments in a model experiment. The experimental design included a control (the original uncontaminated soil, a Haplic Chernozem) and treatments with the addition of Cu, Ni, Zn, Cd and Pb at rates of 2, 10 and 20 times the maximum permissible concentration. The metal compounds extracted with 1 N CH3COONH4 (AAB) are classified as exchangeable. Two sample preparation techniques were used: the air-dry soil was sieved through a sieve with 1 mm holes or through one with 0.25 mm holes. The assessment of HM readsorption processes in soil was based on a comparative analysis of the results of multiple extractions of metals by AAB under static conditions (shaking for 1 hour and settling for a day, repeated 10 times) and under dynamic conditions (10 cycles of continuous processing).
It was found that the extraction of HMs after sample preparation with the 0.25 mm sieve was higher than with the 1 mm sieve (by 3–17%). This is due to the larger surface area of the soil particles. These differences were manifested both in unpolluted soil and sediments and at different levels of pollution; with increasing contamination level, the differences became more noticeable. Under static conditions, a single AAB extraction does not extract the entire stock of mobile forms of HMs. Under dynamic extraction from the soil and sediments, when the conditions do not allow equilibrium to be reached, metal readsorption is eliminated, which leads to greater HM extraction from the soil and sediments.
Thus, the state of the analyzed sample has a significant influence on HM extraction. To analyze and compare the results of fractionation of HM compounds from soils, it is necessary to take into account the sample preparation used and the extraction time required in each method.
This work was supported by a grant of the Russian Science Foundation, project no. 19-74-00085.
How to cite: Burachevskaya, M., Minkina, T., Mandzhieva, S., and Kalinichenko, V.: Methodological aspects of extracting heavy metals from soils and sediments, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-635, https://doi.org/10.5194/egusphere-egu2020-635, 2020.
Soil and sediment contamination by heavy metals (HMs) can create a significant risk to human health. Quite a few human activities produce waste, much of which is discharged in soils as well as rivers and other water bodies where they accumulate in sediments. The behavior of pollutants in the terrestrial ecosystems is characterized by their fractional composition rather than their total content in the soil and sediments. To determine HMs there are different sample preparation techniques: sifted through a sieve with a hole diameter of 1 mm (AAB; McLaren, Crawford 1973; Miller et al, 1986; etc), of 2 mm (EDTA, EDTPA, etc), of 0.25 mm (Tessier et al, 1979). Another problem is the readsorption of metals that depends on the extraction conditions. Due to the fact that there are a number of difficulties in comparing the results obtained by different methods of extraction.
The main objective of this work was to study the influence of sample preparation and readsorption processes on the extractability of HMs from soil and bottom sediments in the model experiment. The experimental design included the control (original uncontaminated soil - Haplic Chernozem), treatments with the addition of Cu, Ni, Zn, Cd and Pb at a rates of 2, 10 and 20 maximum permissible concentration. The metal compounds extracted with the 1 N CH3COONH4 (AAB) are classified as exchangeable. Different sample preparation techniques has been used: the air-dry soil was sieving through a sieve with holes in 1 mm and with holes in 0.25 mm. The assessment of HM readsorption processes in soil was based on the comparative analysis of the results of multiple extraction of metals by AAB static extraction (shake for 1 hour and set aside for a day, 10 times) and dynamic conditions (10 times continuous processing).
It was found that HM extraction after sample preparation with the 0.25 mm sieve was higher than with the 1 mm sieve (by 3–17%), owing to the larger surface area of the finer soil particles. These differences appeared both in unpolluted soil and sediments and at the different pollution levels, and became more pronounced as the contamination level increased. Under static conditions, a single AAB extraction does not recover the entire pool of mobile HM forms. Under dynamic extraction, when the conditions do not allow equilibrium to be reached, metal readsorption is eliminated, which leads to greater HM extraction from the soil and sediments.
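As a rough illustration of how the static and dynamic extraction series can be compared, the cumulative recovery over the ten successive AAB steps might be tabulated as below. This is a minimal sketch: all concentrations are invented for demonstration and are not the study's data.

```python
# Hypothetical per-step Cu yields (mg/kg) from ten successive AAB
# extractions; the values are invented, not measured results.
static_steps = [12.0, 4.1, 2.3, 1.2, 0.8, 0.5, 0.3, 0.2, 0.1, 0.1]
dynamic_steps = [14.5, 5.0, 2.8, 1.5, 0.9, 0.6, 0.4, 0.2, 0.1, 0.1]

def cumulative(steps):
    """Running total of metal extracted after each successive step."""
    total, out = 0.0, []
    for s in steps:
        total += s
        out.append(total)
    return out

static_total = cumulative(static_steps)[-1]
dynamic_total = cumulative(dynamic_steps)[-1]

# Relative gain of dynamic over static extraction, in percent
gain_pct = 100 * (dynamic_total - static_total) / static_total
print(f"static: {static_total:.1f} mg/kg, dynamic: {dynamic_total:.1f} mg/kg, "
      f"gain: {gain_pct:.1f}%")
```

A dynamic total exceeding the static total, as here, would correspond to the readsorption effect described above: each step's yield declines, but eliminating re-equilibration raises the cumulative recovery.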
Thus, the state of the analyzed sample has a significant influence on HM extraction. To analyze and compare the results of fractionation of HM compounds in soils, the sample preparation and the extraction time required by each method must be taken into account.
This work was supported by a grant of the Russian Science Foundation, project no. 19-74-00085.
How to cite: Burachevskaya, M., Minkina, T., Mandzhieva, S., and Kalinichenko, V.: Methodological aspects of extracting heavy metals from soils and sediments, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-635, https://doi.org/10.5194/egusphere-egu2020-635, 2020.
EGU2020-21934 | Displays | ITS2.17/SSS12.2
A study of iodine concentration in drinking waters of Bryansk and Oryol regions
Victor Berezkin, Liydmila Kolmykova, Elena Korobova, and Gulnara Kulieva
The aim of the research was to study iodine in natural waters of different age aquifers in the Bryansk and Oryol regions (Russia), primarily in the waters used by local residents of these regions for drinking purposes. The low iodine concentration in food and drinking water can lead to the development of thyroid gland pathologies, which is typical for the non-Chernozem zone of Russia, including the Bryansk and Oryol regions.
The analysis was based on original data from samples collected during field work in the Bryansk region (2007–2013) and in the Oryol region (2016–2017). In addition to iodine concentration, the main geochemical parameters (pH, Eh, salinity) were determined in the sampled waters.
The results confirmed the previously established significant variation of iodine concentration, both in the Bryansk region (0.7–41.19 mcg/l, n=267) and in the Oryol region (1.12–36.8 mcg/l, n=23). The most contrasting iodine values were found for groundwaters originating from Upper Devonian and Cretaceous aquifers. The dependence of iodine concentration in the waters of the Bryansk region on salinity (r=0.39, n=267), as well as on the content of Ca (r=0.38, n=119, wells) and Mg (r=0.33, n=110, water supply), was shown earlier [1]. The relationship between iodine concentration and salinity in the waters of the Oryol region also proved significant (r=0.32, n=23).
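The correlation coefficients above can be reproduced with a standard product-moment calculation. The abstract does not state which correlation statistic was used, so the sketch below simply computes Pearson's r on invented salinity and iodine values; the data are hypothetical, not the study's measurements.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

salinity = [0.2, 0.4, 0.5, 0.7, 0.9, 1.1, 1.3, 1.6]   # g/l, hypothetical
iodine   = [2.1, 3.0, 2.8, 5.5, 4.9, 7.2, 6.8, 9.1]   # mcg/l, hypothetical

r = pearson(salinity, iodine)
print(f"r = {r:.2f}")  # a positive r indicates higher iodine in more saline waters
```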
Surface waters of the Oryol region were characterized by low iodine concentrations (median 7.40 mcg/l, n=8). Even the maximum value (9.32 mcg/l) does not reach the lower limit of the hygienic standard of 10 mcg/l. This median is rather close to the one found for the Bryansk region (6.76 mcg/l, n=110). The lowest median iodine concentration in the Oryol region was registered in borehole waters (2.96 mcg/l, n=9), which is likely due to their aquifer features. In the Bryansk region, the iodine concentration in borehole waters is likewise lower than in surface and well waters (5.82 mcg/l, n=30). This seems to be because groundwater of Quaternary alluvial and fluvioglacial deposits is used for decentralized water supply in both areas. A high iodine concentration in the Oryol region was observed in tap water (median 27.16 mcg/l, n=6). Similar iodine concentrations were found in deep-lying groundwater in the north-eastern area of the Bryansk region. The higher amount of iodine in deep-lying underground waters can be explained by their chemical composition, in particular their high salinity, as was shown for the Bryansk region [1, etc.].
The study was carried out with partial financial support from RFBR (grant No. 13-05-00823).
References
1. Kolmykova, L. I., Korobova, E. M., Korsakova, N. V., Berezkin, V. Yu., Danilova, V. N., Khushvakhtova, S. D., and Sedykh, E. M.: Assessment of iodine and selenium content in drinking waters of the Bryansk region depending on water-bearing rocks and migration conditions, in: Actual problems of ecology and nature management: Collection of scientific works, 2014, pp. 140–144.
How to cite: Berezkin, V., Kolmykova, L., Korobova, E., and Kulieva, G.: A study of iodine concentration in drinking waters of Bryansk and Oryol regions, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21934, https://doi.org/10.5194/egusphere-egu2020-21934, 2020.
EGU2020-7624 | Displays | ITS2.17/SSS12.2
Links between soil composition and podoconiosis occurrence and prevalence in Cameroon
Harriet Gislam, Niall Burnside, Matthew Brolly, Kebede Deribe, Gail Davey, Samuel Wanji, and Cheo Emmanuel Suh
Introduction: Podoconiosis, a form of non-filarial elephantiasis, is a geochemical disease associated with exposure to red clay soils derived from alkalic volcanic rock. It is estimated that 4 million people worldwide suffer from the disease, though the exact causal agent is unknown. This study is the first analysis in Cameroon to compare high-resolution ground-sampled geochemical soil variables and remote sensing data in relation to podoconiosis prevalence and occurrence.
Aim: To investigate the associations of soil mineralogical and element variables in relation to podoconiosis prevalence and occurrence in Cameroon.
Methods: In this study, exploratory statistical and spatial data analyses were conducted on soil and spatial epidemiology data associated with podoconiosis. The soil data comprised 194 samples from a 65 by 45 km area, covering 19 minerals and 55 elements. The initial proximal analysis included a spatial join between the prevalence data points and the closest ground-sampled soil variables. In addition, the soil variables were interpolated to create continuous surfaces, and soil values were extracted from these surfaces at each prevalence data point. Correlation and logistic regression analyses were carried out on both the proximal data set and the interpolated soil variables. The interpolated soil variables were also analysed using principal component analysis (PCA) to identify any patterns or clusters regarding podoconiosis occurrence.
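The proximal (nearest-sample) join described above can be sketched as follows. Coordinates, the `quartz_pct` variable, and case counts are all hypothetical, chosen only to illustrate the matching step.

```python
import math

# Hypothetical ground-sampled soil measurements (positions in km)
soil_samples = [
    {"x": 0.0, "y": 0.0, "quartz_pct": 35.0},
    {"x": 10.0, "y": 5.0, "quartz_pct": 52.0},
    {"x": 4.0, "y": 12.0, "quartz_pct": 41.0},
]

# Hypothetical podoconiosis prevalence data points
prevalence_points = [
    {"x": 1.0, "y": 1.0, "cases": 3},
    {"x": 9.0, "y": 6.0, "cases": 0},
]

def nearest(point, samples):
    """Return the ground sample closest (Euclidean distance) to a point."""
    return min(samples,
               key=lambda s: math.hypot(s["x"] - point["x"], s["y"] - point["y"]))

# Join each prevalence point to its closest soil sample
joined = [dict(p, soil=nearest(p, soil_samples)) for p in prevalence_points]
for rec in joined:
    print(rec["cases"], "cases near soil with", rec["soil"]["quartz_pct"], "% quartz")
```

The interpolated-surface variant replaces `nearest` with a value sampled from a continuous surface, but the join logic is the same.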
Results: Bivariate analysis of the proximal and interpolated data set identified several statistically significant soil variables associated with podoconiosis. Correlation analysis identified several soil variables with a statistically significant positive Spearman rho value in relation to podoconiosis prevalence. Logistic regression analysis identified several statistically significant soil variables with odds ratio values greater than 1, with respect to the podoconiosis occurrence data. The significant variables included barium, beryllium, potassium, sodium, rubidium, strontium, thallium, potassium feldspar, mica and quartz. Barium, beryllium, potassium, sodium, quartz, mica and potassium feldspar have been previously identified in the literature in relation to podoconiosis occurrence. The PCA biplots showed no definite groupings of soil compositions with respect to podoconiosis occurrence. However, the envelope of the 95% confidence ellipse, representing prevalence data with at least one case of podoconiosis, does begin to separate as the soil variables suggested to be associated with podoconiosis occurrence increase and reach maximal values.
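For context on the odds ratios reported above: in logistic regression an odds ratio greater than 1 corresponds to exponentiating a positive fitted coefficient. A minimal sketch, with an invented coefficient value (not a result of this study):

```python
import math

# Hypothetical fitted coefficient: log-odds change in podoconiosis
# occurrence per unit increase in a soil variable (e.g. quartz content).
beta_quartz = 0.18

odds_ratio = math.exp(beta_quartz)
print(f"OR = {odds_ratio:.2f}")  # OR > 1: occurrence odds rise with the variable
```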
Conclusion: The findings suggest that the key minerals and elements identified in this study may play a role in the pathogenesis of podoconiosis or could be disease covariates. These results have prompted ongoing research within this project into whether medium- and high-resolution hyperspectral methods can remotely detect podoconiosis-associated soil variables, such as quartz. The data could then be used to predict areas at risk with multivariate machine learning techniques, on the hypothesis of a link between prevalence, presence and combinations of multiple soil-related variables.
This study is supported by the National Institute for Health Research (NIHR) Global Health Research Unit on NTDs at BSMS (16/136/29). The views expressed are those of the author and not necessarily those of the NIHR or the Department of Health and Social Care.
How to cite: Gislam, H., Burnside, N., Brolly, M., Deribe, K., Davey, G., Wanji, S., and Suh, C. E.: Links between soil composition and podoconiosis occurrence and prevalence in Cameroon, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7624, https://doi.org/10.5194/egusphere-egu2020-7624, 2020.
EGU2020-11355 | Displays | ITS2.17/SSS12.2 | Highlight
Agroforestry systems towards rehabilitation of West Africa marginal areas through an integrated green biotechnology approach
Filipa Monteiro, Maria Manuela Abreu, Augusto Manuel Correia, and Patrícia Vidigal
To achieve the goals set by the 2030 Agenda for Sustainable Development, it is imperative to create sustainable solutions that recover marginal lands (e.g. landfills or abandoned mining areas) and create conditions for agricultural activities; above all, this is a concern that deserves political priority. Landfills pose health and environmental concerns due to the presence of potentially hazardous elements (PHE), among other contaminants, that cannot be degraded, leading to soil and water contamination, an increasing concern on the African continent. In 2018, a report to the United Nations Framework Convention on Climate Change by the Republic of Guinea-Bissau stated that waste management is one of the major problems the country faces. It is therefore essential to create solutions beyond contaminated-waste management, such as the rehabilitation of these areas. A potential rehabilitation strategy is the combination of phytostabilization with Technosols. Phytostabilization uses plants to decrease the mobility of, or immobilize, PHE in the rhizosphere; these plants should also have low PHE translocation factors from the soil/roots to the shoots. Technosols can be constructed from a mixture of wastes of different kinds and origins (e.g. landfill, construction) to obtain an anthropic soil whose properties (e.g. fertility, water-holding capacity, structure) decrease PHE availability and promote plant growth, minimizing the risk to both human health and the environment. A possible strategy for the rehabilitation of contaminated areas could be the establishment of an agroforestry system, intercropping legumes for phytostabilization, using cashew as a case study because of its importance as a revenue commodity in West African countries (e.g. Guinea-Bissau).
However, it is of utmost importance to identify the nature and quantity of the PHE and wastes, as well as the climatic conditions of each contaminated/degraded site, before creating an agroforestry system there, thus ensuring the sustainability of the phyto-geo-technology and food security. Potential alternative revenues from the agroforestry system also arise. As such, we present a potential rehabilitation agroforestry system that can in the future help African countries attain the goals set for 2030 and beyond.
How to cite: Monteiro, F., Abreu, M. M., Correia, A. M., and Vidigal, P.: Agroforestry systems towards rehabilitation of West Africa marginal areas through an integrated green biotechnology approach, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11355, https://doi.org/10.5194/egusphere-egu2020-11355, 2020.
EGU2020-20543 | Displays | ITS2.17/SSS12.2
Assessment of total and available heavy metal contents in vineyards managed under different agriculture practices
Adelcia Veiga, Carla Carla Ferreira, Anne-Karine Boulet, Ana Caetano, Óscar Gonzalez-Pelayo, Nelson Abrantes, Jacob Keizer, and António Ferreira
Land degradation is a major challenge, particularly in intensively farmed areas such as typical vineyards. Soil contamination with heavy metals is a widespread phenomenon in vineyards due to the intensive use of pesticides, fertilizers, manure and slurry. As a result, vineyard soils have accumulated heavy metals and other trace elements that may be phytotoxic, non-biodegradable and persistent, representing a long-term threat to the crop system and the food chain. The Portuguese vineyard area is the fourth largest in Europe (178,770 ha), making vines one of the country's most relevant crops. Different approaches, such as environmental programs and innovative management practices, have been adopted over recent years to minimize soil contamination by heavy metals. However, quality standards for heavy metals in agricultural soils are mainly based on total content, which is insufficient to estimate the potential environmental risk: the toxicity of metals depends not only on their total concentration but above all on their availability. Nevertheless, knowledge of the "bioavailable fraction" of heavy metals in agricultural soils, and particularly in vineyards, is still limited. This study, developed under the iSQAPER research project, aims to assess the total and available heavy metal contents in vineyards under different management practices: (1) no tillage, (2) integrated production, and (3) conventional farming. The integrated-production and conventional farms in the study sites have been intensively managed for more than 5 years, and the no-tillage vineyard for more than 30 years. The study was performed in 2018, based on soil sampling before and after pesticide application (April and July, respectively). Soil samples were analyzed for pH, soil organic matter content (SOM), and total and available (DTPA-extractable) heavy metal contents (Cu, Cd, Cr, Pb, Zn and Ni).
Preliminary results show higher total Cu, Pb, Cr and Ni contents in the no-tillage vineyard than in the farms with conventional and integrated production practices. Copper is the heavy metal with the highest total concentrations, mainly due to the intensive application of Cu-based fungicides. In the no-tillage vineyard, the long-term practices have led to total Cu levels above the soil quality standards. Moreover, similar total Zn contents were observed under no-tillage and integrated production practices. The higher SOM content observed under integrated production may have favored Zn accumulation in the topsoil layer. Organic matter contents were higher in integrated production farming than under no-tillage and conventional practices: 2.6%, 1.3% and 1.2%, respectively. Understanding the total and bioavailable fractions of heavy metals in vineyards is crucial to assess their potential toxicity to plants, animals and humans, and identifying the best agricultural management practices is a key factor in mitigating land degradation in vineyards.
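The distinction between total and available content reduces, numerically, to expressing the DTPA-extractable concentration as a fraction of the total for each metal. A minimal sketch with invented concentrations (not the study's measurements):

```python
# Hypothetical total and DTPA-extractable concentrations (mg/kg)
totals    = {"Cu": 180.0, "Zn": 95.0, "Pb": 40.0}
available = {"Cu": 62.0,  "Zn": 21.0, "Pb": 6.5}

# Available ("bioavailable") fraction as a percentage of the total content
availability_pct = {m: 100 * available[m] / totals[m] for m in totals}
for metal, pct in availability_pct.items():
    print(f"{metal}: {pct:.1f}% of total is DTPA-extractable")
```

Two soils with the same total Cu can thus differ sharply in risk if their available fractions differ, which is why the study reports both quantities.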
How to cite: Veiga, A., Carla Ferreira, C., Boulet, A.-K., Caetano, A., Gonzalez-Pelayo, Ó., Abrantes, N., Keizer, J., and Ferreira, A.: Assessment of total and available heavy metal contents in vineyards managed under different agriculture practices, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20543, https://doi.org/10.5194/egusphere-egu2020-20543, 2020.
EGU2020-11421 | Displays | ITS2.17/SSS12.2
Perspective of the rehabilitation of marginal areas: the case of Lablab purpureus (L.) Sweet
Patrícia Vidigal, Erisa S. Santos, Augusto Manuel Correia, Fernando Monteiro, and Maria Manuela Abreu
It is estimated that the world population will reach 9.1 billion by 2050, resulting in increasing food demand and consumption, but also in waste production. Moreover, to help achieve the goals set by the 2030 Agenda for Sustainable Development, it is imperative to develop sustainable strategies for the recovery of marginal lands (e.g. landfills or abandoned mining areas) and create conditions for agricultural activities. There is thus a need both to increase agricultural production and to create sustainable waste management approaches. Several landfills pose health and environmental concerns associated with the non-selective deposition of wastes, which contain potentially hazardous elements (PHE), and with the absence of environmental management systems. Leachates rich in PHE can therefore spread to adjacent areas, leading to soil and water contamination. This is particularly concerning considering the growing share of the Sub-Saharan African (SSA) population that will be living in urban or peri-urban areas and practicing subsistence farming there; for SSA, it is estimated that by 2050 about 50% of the population will be living in towns and cities. The recovery of landfills, in addition to other environmental management measures, can involve the development of a secure plant cover that creates conditions for agricultural activities while protecting the food chain, and also improves environmental and landscape impacts. Plant species selected for the green cover should be able to decrease the mobility of, or immobilize, PHE in the rhizosphere; they should also have low PHE translocation factors from the soil/roots to the shoots. Plants with these characteristics are not common, and efforts to identify them must be increased. Moreover, in the SSA context these species should be native and known to the population.
Studying the behaviour of African crops such as Lablab purpureus (L.) Sweet can therefore be a promising option, since Lablab accumulates PHE in the roots and has low translocation factors from the soil/roots to the shoots, so that the concentrations present in the shoots are safe for animal consumption. It is important to note that the characteristics of each landfill, as well as the climatic conditions of its location, can be totally different, so an initial, multidisciplinary characterization of the study area is crucial. Moreover, ecophysiological plant behaviour, namely PHE accumulation in the edible parts, depends on the plant species and edaphoclimatic conditions, so further studies should be carried out to assess the impact on the food chain.
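The translocation factor used above to screen candidate species is simply the ratio of shoot to root element concentration; values below 1 indicate retention in the roots, the behaviour sought for phytostabilization. A minimal sketch with invented concentrations (not Lablab measurements):

```python
# Hypothetical element concentrations (mg/kg dry weight) in roots and shoots
root_conc  = {"Pb": 48.0, "Zn": 120.0, "Cu": 35.0}
shoot_conc = {"Pb": 6.0,  "Zn": 40.0,  "Cu": 9.0}

# Translocation factor: TF = shoot concentration / root concentration
tf = {e: shoot_conc[e] / root_conc[e] for e in root_conc}

# TF < 1 for every element would support a phytostabilization candidate
suitable = all(v < 1 for v in tf.values())
print(tf, "candidate for phytostabilization:", suitable)
```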
How to cite: Vidigal, P., Santos, E. S., Correia, A. M., Monteiro, F., and Abreu, M. M.: Perspective of the rehabilitation of marginal areas: the case of Lablab purpureus (L.) Sweet , EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11421, https://doi.org/10.5194/egusphere-egu2020-11421, 2020.
EGU2020-11014 | Displays | ITS2.17/SSS12.2
Testing statistical methods to predict pesticide drift deposition
Glenda Garcia-Santos, Michael Scheiber, and Juergen Pilz
We studied the Andean region in Colombia as an example of non-mechanized small farming systems in which farmers use handheld sprayers to apply pesticides; this is the most common pesticide application technique in developing countries. To better understand the spatial distribution of airborne pesticide drift deposits on the soil surface under this spray technique, nine spatial interpolation methods were tested using a surrogate tracer substance (Uranine): classical approaches such as linear interpolation and kriging, and more advanced methods such as spatial vine copulas, the Karhunen-Loève expansion of the underlying random field, the integrated nested Laplace approximation, and the Empirical Bayesian Kriging implemented in ArcMap (GIS). This study contributes to future work on mass balance and risk assessment related to environmental drift pollution in developing countries.
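As a rough illustration of how such interpolators can be compared, the sketch below scores three generic SciPy methods by leave-one-out RMSE on synthetic tracer data; the study's actual nine methods (kriging, vine copulas, INLA, etc.) are not reproduced here, and all data values are invented:

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
# Synthetic stand-in for Uranine deposits measured at sampler locations
pts = rng.uniform(0.0, 10.0, size=(50, 2))
vals = np.exp(-0.3 * pts[:, 0]) + 0.05 * rng.standard_normal(50)

def loo_rmse(method):
    """Leave-one-out root-mean-square error for a griddata method."""
    errs = []
    for i in range(len(pts)):
        mask = np.arange(len(pts)) != i
        est = griddata(pts[mask], vals[mask], pts[i][None, :], method=method)[0]
        if not np.isnan(est):  # points outside the convex hull give NaN
            errs.append((est - vals[i]) ** 2)
    return float(np.sqrt(np.mean(errs)))

for method in ("nearest", "linear", "cubic"):
    print(method, round(loo_rmse(method), 4))
```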
How to cite: Garcia-Santos, G., Scheiber, M., and Pilz, J.: Testing statistical methods to predict pesticide drift deposition, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11014, https://doi.org/10.5194/egusphere-egu2020-11014, 2020.
EGU2020-21704 | Displays | ITS2.17/SSS12.2
Modelling fate and transport of pesticides: the case study of the contamination in Valencia aquifers.
Ricardo Pérez Indoval, Eduardo Cassiraga, and Javier Rodrigo-Ilarri
Predicting the fate of pesticides released into the natural environment is necessary to anticipate and minimize adverse effects far from the contamination source. These effects arise due to the movement of pesticides in surface water and can take place via drift, surface runoff and subsurface flow. A number of models have been developed to predict the behavior, mobility, and persistence of pesticides. These models should account for key hydrological processes, such as crop growth, pesticide application, transformation processes and field management practices.
In this work, the Pesticide Water Calculator (PWC) model developed by the U.S. Environmental Protection Agency (USEPA) is applied to simulate the fate and transport of pesticides in the unsaturated zone of an aquifer. The model is used to estimate daily pesticide concentrations in the Valencia aquifers (Spain), where pesticide concentrations have been found to exceed the Maximum Concentration Levels (MCLs) established by Spanish legislation.
The simulations carried out in this work address different environmental scenarios and include a sensitivity analysis of the model parameters. The results of the PWC model provide a crucial first step towards the development of pesticide risk assessment in the Valencia region. They also show that numerical simulation is a valid tool for analysing and predicting the fate and transport of pollutants in soil and groundwater.
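A parameter sensitivity analysis of this kind is often done one parameter at a time; the sketch below applies that scheme to a toy leaching index, since the actual PWC parameterization is not reproduced here (the function and parameter ranges are purely illustrative):

```python
# One-at-a-time (OAT) sensitivity sketch. The leaching index below is a
# toy stand-in, NOT the PWC model: mobility increases with recharge and
# persistence (half-life) and decreases with sorption (Koc).
def leached_index(koc, half_life, recharge):
    return recharge * half_life / (koc + half_life * recharge)

baseline = {"koc": 100.0, "half_life": 30.0, "recharge": 0.5}

for name in baseline:
    for factor in (0.5, 2.0):  # halve and double each parameter in turn
        params = dict(baseline)
        params[name] *= factor
        print(f"{name} x{factor}: {leached_index(**params):.3f}")
```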
How to cite: Pérez Indoval, R., Cassiraga, E., and Rodrigo-Ilarri, J.: Modelling fate and transport of pesticides: the case study of the contamination in Valencia aquifers., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21704, https://doi.org/10.5194/egusphere-egu2020-21704, 2020.
EGU2020-11210 | Displays | ITS2.17/SSS12.2
Calibration and validation of the EPIC model to predict glyphosate movement with different agronomic practices under shallow water table conditions
Matteo Longo, Nicola Dal Ferro, Roberto Cesar Izaurralde, Miguel Cabrera, Federico Grillo, Barbara Lazzaro, Alessandra Cardinali, Giuseppe Zanin, and Francesco Morari
Glyphosate (GLP) has been the most frequently used herbicide worldwide, including in Europe. Due to its systemic, post-emergence, and non-selective characteristics, it offers effective weed control without the need for mechanical treatments, and it is therefore widely used in no-till practices. However, increasing awareness of its potentially harmful effects on human health and ecosystems has led numerous countries to restrict or even ban its use. The EPIC (Environmental Policy Integrated Climate) model was selected as a screening tool to evaluate the vulnerability of groundwater to glyphosate contamination across the Veneto Region (NE Italy), an area where the interaction of different pedo-climatic and agronomic conditions makes it difficult to predict site-specific GLP movement. The aim of this study was to evaluate the performance of a modified version of EPIC that includes a fast solution of Richards' equation to predict GLP dynamics under shallow water table conditions. The experimental site, in Northeastern Italy, consisted of eight drainable lysimeters hosting four treatments, replicated twice, in a factorial combination of two management practices (conventional, CV, and conservation, CA, agriculture) and two water table levels (60 and 120 cm). Degradation and movement of GLP in the soil profile were monitored from May to September 2019. The herbicide (144 mg m-2) was applied on bare soil in CV and on the cover crop (Secale cereale) in CA. Water samples were systematically collected at 15, 30 and 60 cm depth using suction cups, whose suction was regulated by an automated system combining matric potential readings from electronic tensiometers with a vacuum regulator. Groundwater was also sampled. Soil samples were collected at 0-5 and 5-15 cm depth every other week.
Weather and soil data were used as input to EPIC, while the GLP experimental results, along with yields, soil water content, evapotranspiration and water percolation data, were used to calibrate (2011 to 2017) and validate (2018 to 2019) the model. In all lysimeters, GLP reached the groundwater the day after the first irrigation event, with higher leaching in CV than in CA and at 120 cm than at 60 cm. After 40 days, GLP had almost completely dissipated in the CA soil, while it was still detected in CV. EPIC acceptably reproduced evapotranspiration (R2=0.76), yields (R2=0.74) and water percolation (R2=0.59-0.90). In general, GLP predictions compared well with observations, although predictions for the CV treatments were closer to observations than those for the CA treatments. This work showed the robustness of the modified EPIC, suggesting its use as a tool to assess the potential vulnerability of groundwater under different management scenarios and water table levels.
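The calibration scores quoted above are coefficients of determination; a minimal sketch of how such an R2 between observed and simulated series is computed (the values are illustrative, not the study's data):

```python
import numpy as np

def r_squared(obs, sim):
    """Coefficient of determination between observed and simulated values."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    ss_res = np.sum((obs - sim) ** 2)         # residual sum of squares
    ss_tot = np.sum((obs - obs.mean()) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

# Illustrative observed vs simulated percolation values
print(round(r_squared([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8]), 3))  # 0.98
```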
How to cite: Longo, M., Dal Ferro, N., Izaurralde, R. C., Cabrera, M., Grillo, F., Lazzaro, B., Cardinali, A., Zanin, G., and Morari, F.: Calibration and validation of the EPIC model to predict glyphosate movement with different agronomic practices under shallow water table conditions, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11210, https://doi.org/10.5194/egusphere-egu2020-11210, 2020.
EGU2020-1742 | Displays | ITS2.17/SSS12.2
Aminomethylphosphonic acid (AMPA) retention in different soil profile horizons
Eliana Gonzalo Mayoral, Virginia Aparicio, José Luis Costa, and Eduardo De Gerónimo
Aminomethylphosphonic acid (AMPA) is a metabolite of the microbial degradation of the widely used herbicide glyphosate and of other phosphonate compounds, such as detergents. In the soil, AMPA adsorbs more strongly than glyphosate. No studies have been reported on the adsorption of AMPA throughout the soil profile; there are only a few studies of retention in the surface horizons. The objective of this study was therefore to determine the adsorption coefficients of AMPA in the three main horizons of a typical Argiudoll.
Adsorption isotherms were obtained by shaking 1 g of soil in 10 ml of CaCl2 (0.01 M) at different AMPA concentrations (0, 2, 5, 10, 20, 50 and 100 ppm), with six replicates for each main horizon (A, B, C). The samples were incubated and agitated at 25 °C for 24 hours to reach equilibrium and then centrifuged at 3000 rpm for 10 minutes. The AMPA concentration was quantified by UPLC-MS/MS (Waters®), and the experimental data were fitted to the Freundlich model. In parallel, physico-chemical determinations were made for each horizon in order to characterize the soil.
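Fitting sorption data to the Freundlich model, Cs = Kf·Ce^n, can be sketched as below; the concentrations are invented for illustration and are not the study's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def freundlich(ce, kf, n):
    """Freundlich isotherm: sorbed amount Cs = Kf * Ce**n."""
    return kf * ce ** n

# Illustrative equilibrium solution (Ce) and sorbed (Cs) concentrations
ce = np.array([0.1, 0.5, 1.0, 2.0, 5.0])
cs = np.array([30.0, 80.0, 120.0, 170.0, 280.0])

(kf, n), _ = curve_fit(freundlich, ce, cs, p0=(100.0, 0.5))
print(f"Kf = {kf:.1f}, n = {n:.2f}")
```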
AMPA adsorption exceeded 91, 85 and 74% of the applied concentration, across all concentrations, in horizons A, B and C, respectively; within each horizon, these percentages decreased from the lowest to the highest concentration. Comparing horizons at each applied concentration, horizon B presented the highest percentages of AMPA adsorption, followed by A and then C; only at the highest concentration (100 ppm) did horizon A register the highest adsorption percentage. Accordingly, the Kf values obtained were 295, 329 and 152 for horizons B, A and C, respectively, with significant differences for the latter.
When correlating Kf values with soil properties, the cation exchange capacity, K content and clay percentage were the properties that correlated most strongly with Kf, whereas the sand percentage and pH showed a strong negative correlation with Kf.
The results indicate that, in soils (or horizons) with a high clay content, AMPA is strongly retained, decreasing the probability of its transport to groundwater.
How to cite: Gonzalo Mayoral, E., Aparicio, V., Costa, J. L., and De Gerónimo, E.: Aminomethylphosphonic acid (AMPA) retention in different soil profile horizons, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1742, https://doi.org/10.5194/egusphere-egu2020-1742, 2020.
EGU2020-1775 | Displays | ITS2.17/SSS12.2
Summer crops and the impact of pesticides on surface and underground water in the southeast of the province of Buenos Aires, Argentina.
José Luis Costa, Hernan Angelini, Eduardo De Geronimo, and Virginia Aparicio
Agricultural land is the first recipient of pesticides after application. Even when pesticides are applied in accordance with regulations, only a fraction reaches the target (weed or pest), while the rest represents a potential environmental pollutant (Hvězdová et al., 2018); pesticides thereby become a non-point source of contamination.
The objective of this work was to evaluate the impact of summer crop practices on pesticide concentrations in surface water and groundwater. Two phreatimeters (shallow groundwater observation wells) were installed in soybean and corn fields next to surface water courses. Groundwater depth was evaluated on six dates (19/12/2018, 4/1/2019, 14/1/2019, 8/2/2019, 15/2/2019 and 25/2/2019). Water samples were extracted and the concentrations of 45 organic molecules (pesticides and degradation products) were determined by UPLC-MS/MS. Once each molecule was quantified, the concentrations were summed to establish the proportions corresponding to a) glyphosate + AMPA; b) atrazine + hydroxy-atrazine + desethyl-atrazine + desisopropyl-atrazine; c) 2,4-D; and d) other molecules.
The groundwater in the phreatimeters was always at a depth greater than 1.30 m. The sum of molecules ranged from 0.17 to 39.1 µg l-1 in one case and from 1.3 to 12.5 µg l-1 in the other during the evaluation period; in both cases, the average proportions of the summed molecules followed the order glyphosate + metabolite > atrazine + metabolites > 2,4-D > other organic molecules.
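The grouping of parent compounds with their metabolites described above amounts to summing per-group concentrations; a minimal sketch with invented values (µg l-1):

```python
# Illustrative sample: measured concentrations in ug/l (invented values)
sample = {
    "glyphosate": 1.2, "AMPA": 3.4,
    "atrazine": 0.5, "hydroxy-atrazine": 0.3,
    "desethyl-atrazine": 0.1, "desisopropyl-atrazine": 0.05,
    "2,4-D": 0.8, "acetochlor": 0.2,
}

groups = {
    "glyphosate + AMPA": ["glyphosate", "AMPA"],
    "atrazine + metabolites": ["atrazine", "hydroxy-atrazine",
                               "desethyl-atrazine", "desisopropyl-atrazine"],
    "2,4-D": ["2,4-D"],
}

totals = {name: sum(sample[m] for m in members)
          for name, members in groups.items()}
totals["other"] = sum(sample.values()) - sum(totals.values())

for name, total in totals.items():
    print(f"{name}: {total:.2f} ug/l")
```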
These preliminary results indicate that the grain production system generates an impact, evidenced by the presence of synthetic organic molecules in the water. It is important to adjust crop management practices to avoid and/or minimize this impact and its environmental consequences.
How to cite: Costa, J. L., Angelini, H., De Geronimo, E., and Aparicio, V.: Summer crops and the impact of pesticides on surface and underground water in the southeast of the province of Buenos Aires, Argentina., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1775, https://doi.org/10.5194/egusphere-egu2020-1775, 2020.
EGU2020-1772 | Displays | ITS2.17/SSS12.2
Herbicides distribution in sediments of the Argentina rolling pampas landscape
Ana Clara Caprile, Virginia Aparicio, María Liliana Darder, Eduardo De Gerónimo, and Adrian Andriulo
Soil losses due to water erosion exceed tolerance levels under the edaphoclimatic conditions of the rolling pampas. Eroded sediments transport pesticides beyond field boundaries, and better knowledge of their polluting potential would allow agronomic practices to be redirected towards sustainability. The objectives of this work were to: a) analyze the distribution patterns of herbicides frequently used in agricultural production, and b) evaluate herbicide and soil properties that may explain their distribution pattern across the landscape. In an exclusively agricultural area of the upper basin of the Pergamino stream, rainfall simulations were carried out in different landscape positions (upland, mid-slope and lowland). In the upland and mid-slope positions (well-drained Mollisols), agriculture is practiced with a tendency towards soybean monoculture under no-till; in the lowland (Mollisols and alkaline-saline Alfisols), cattle breeding and rearing are carried out on improved grasslands. Sediments were obtained at 23 sampling points using a rainfall simulator run for one hour at high intensity (60 mm h-1). In the sediments, the concentrations of 2,4-D, acetochlor, atrazine and its metabolites, flurochloridone, glyphosate and AMPA, and s-metolachlor were determined. In addition, basic infiltration, runoff coefficient (%), slope, sediment amount, texture, soil organic carbon (SOC), pH, electrical conductivity and exchangeable sodium at 0-5 cm were measured. Non-parametric tests of herbicide concentrations between landscape positions, and correlations with the analyzed variables, were performed. The production systems practiced in the different landscape positions, even on gentle slopes, favor surface runoff (between 45 and 64%) under heavy rains and generate significant sediment losses. No differences were found in sediment amount between landscape positions, and there was no relationship between sediment amount and herbicide concentration.
The herbicides applied in agriculture moved to the lower parts of the landscape, where they are not applied. Three distribution patterns of concentrations were found, corresponding to particular herbicide and soil properties. The average concentrations of 2,4-D, acetochlor and s-metolachlor were higher in the lowland than in the upland and mid-slope positions: their low/moderate adsorption coefficients, moderate/high solubilities and association with higher sand content and SOC led to their accumulation in the lowland. In contrast, the average concentrations of glyphosate and AMPA were higher in the upland and mid-slope positions, as a consequence of their high adsorption coefficients in soils with higher clay and silt content. Finally, the average concentrations of atrazine-OH and flurochloridone did not differ between landscape positions; their moderate adsorption to the soil, low solubility and lack of relationship with soil properties resulted in a relatively homogeneous distribution across the landscape. It is necessary to implement crop rotations that improve soil surface properties so as to increase herbicide retention and degradation and thereby decrease runoff, the herbicide load in runoff, and the associated environmental risks.
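A non-parametric comparison of concentrations across landscape positions, of the kind mentioned above, can be sketched with a Kruskal-Wallis test; the concentration values are invented for illustration:

```python
from scipy.stats import kruskal

# Invented herbicide concentrations (ug/kg) per landscape position
upland    = [1.1, 0.9, 1.3, 1.0]
mid_slope = [1.2, 1.4, 1.0, 1.1]
lowland   = [2.8, 3.1, 2.5, 3.0]

# H statistic and p-value; a small p suggests the positions differ
stat, p = kruskal(upland, mid_slope, lowland)
print(f"H = {stat:.2f}, p = {p:.4f}")
```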
How to cite: Caprile, A. C., Aparicio, V., Darder, M. L., De Gerónimo, E., and Andriulo, A.: Herbicides distribution in sediments of the Argentina rolling pampas landscape, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1772, https://doi.org/10.5194/egusphere-egu2020-1772, 2020.
EGU2020-1773 | Displays | ITS2.17/SSS12.2
Management effects on glyphosate and AMPA concentrations in the PM10 emitted by soils of the central semiarid region of Argentina – Nancy B. Ramirez Haberkon, Virginia Aparicio, Silvia B. Aimar, Daniel E. Buschiazzo, Eduardo De Geronimo, José Luis Costa, and Mariano Mendez
Particulate matter smaller than 10 microns (PM10) is important because of its negative effects on human health. Soil is one of the most important sources of PM10, which can be emitted through wind erosion, tillage and traffic on unpaved roads. In agricultural soils, different fertilizers and agrochemicals are used to produce food. Glyphosate is the most widely used herbicide in Argentina and worldwide, with the dose and the number of applications per year varying between management systems. The objective of this study was to analyze the concentrations of glyphosate and its main metabolite, AMPA, in the PM10 emitted by soils under different management and herbicide-use regimes. For this, the first 5 cm of the following soils were sampled: 9 soils with harvest crops (HC), mostly glyphosate-resistant, under direct sowing and with at least 3 applications of glyphosate per year; 5 soils with forage crops (FC), mostly non-glyphosate-resistant, under conventional tillage and with one application of glyphosate per year; and 2 soils with permanent pasture (PP) that had received neither glyphosate nor tillage during the last 30 years. PM10 was extracted from the soil samples and collected using an easy dust generator coupled to an electrostatic precipitator. Glyphosate and AMPA contents were determined in the soils and in the PM10. The results showed that the detection rate of glyphosate in PM10 was 100% in HC and FC and 83% in PP, whereas AMPA was detected in 100% of samples in all management systems. In the soil, glyphosate was detected in 100% of HC, 80% of FC and 0% of PP samples; for AMPA, the detection rate was 100% in HC and FC and 66% in PP. Glyphosate and AMPA contents in the soil were higher in HC (87.1 µg kg-1 and 1015.5 µg kg-1) than in FC (4.4 µg kg-1 and 140.3 µg kg-1) and PP (0 µg kg-1 and 8.5 µg kg-1) (p < 0.05).
The same pattern was found in PM10, where glyphosate and AMPA contents in HC (279.5 µg kg-1 and 4690.5 µg kg-1) were higher than in FC (21.1 µg kg-1 and 503.4 µg kg-1) and PP (33.5 µg kg-1 and 128.4 µg kg-1) (p < 0.05). The AMPA content was higher than that of glyphosate in both the soil and the PM10 of the three management systems studied, and glyphosate and AMPA contents in the PM10 were higher than in the soil. This study shows that more frequent use of glyphosate increases its content, and that of AMPA, in both the soil and the PM10, and confirms that glyphosate and AMPA contents in PM10 exceed those in the soil under different management systems. Our results suggest that glyphosate and AMPA are very likely present in the PM10 emitted from agricultural soils and can, in this way, be transported to non-target areas. These results should be confirmed under field conditions.
How to cite: Ramirez Haberkon, N. B., Aparicio, V., Aimar, S. B., Buschiazzo, D. E., De Geronimo, E., Costa, J. L., and Mendez, M.: Management effects on glyphosate and AMPA concentrations in the PM10 emitted by soils of the central semiarid region of Argentina, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1773, https://doi.org/10.5194/egusphere-egu2020-1773, 2020.
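The PM10-over-soil ratios implied by the concentrations quoted above can be computed directly. This is a back-of-the-envelope sketch using only the numbers in the abstract; the dictionaries and the `enrichment` helper are our own illustrative names, and PP glyphosate is excluded because its soil content is reported as zero:

```python
# Mean concentrations quoted in the abstract (ug/kg)
soil = {"HC": {"gly": 87.1, "ampa": 1015.5},
        "FC": {"gly": 4.4,  "ampa": 140.3}}
pm10 = {"HC": {"gly": 279.5, "ampa": 4690.5},
        "FC": {"gly": 21.1,  "ampa": 503.4}}

def enrichment(system, analyte):
    """PM10/soil concentration ratio; >1 means the fine fraction is enriched.

    PP is omitted: its soil glyphosate content is 0, so the ratio is undefined.
    """
    return pm10[system][analyte] / soil[system][analyte]

for system in soil:
    for analyte in ("gly", "ampa"):
        print(f"{system} {analyte}: enrichment = {enrichment(system, analyte):.1f}x")
```

All four computable ratios exceed one (roughly 3x to 5x), which is the quantitative content behind the statement that PM10 contents are greater than soil contents.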
ITS3.1/NP1.2 – Tipping Points in the Earth System
EGU2020-18559 | Displays | ITS3.1/NP1.2
Bifurcations, Global Change, Tipping Points and All That – Michael Ghil
In this talk, we will attempt to cover, as time permits, issues pertaining to a self-consistent, unified treatment of the climate system’s natural and forced variability, i.e. climate change, sensitivity and intrinsic variability. To set the stage, key features of short-, intermediate-, and long-term prediction will be sketched, followed by the effects of the system’s multiple scales of motion. After summarizing the main results and uncertainties of successive assessment reports of the Intergovernmental Panel on Climate Change (IPCC), time-dependent forcing will be introduced, in both its natural and anthropogenic forms.
We will outline the generalization of strange attractors to this non-autonomous setting, namely pullback and random attractors (PBAs & RAs), as well as the generalization of the bifurcations known from classical, autonomous dynamical systems to the tipping points (TPs) of non-autonomous ones. The case of the Lorenz convection model with stochastic forcing and of its RA will be used as an illustrative example. The talk will conclude with a list of questions and a selected bibliography.
How to cite: Ghil, M.: Bifurcations, Global Change, Tipping Points and All That, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18559, https://doi.org/10.5194/egusphere-egu2020-18559, 2020.
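As a minimal numerical sketch of the random-attractor idea discussed above (our own illustration, not the speaker's material): integrate an ensemble of Lorenz-63 trajectories that all feel the same additive noise path; the ensemble snapshot at the final time approximates a frozen-time state of the pullback/random attractor. The parameter values, noise amplitude, and function name are illustrative assumptions:

```python
import numpy as np

def lorenz_ensemble(n_ens=50, t_end=20.0, dt=0.005, sigma_noise=0.5, seed=0):
    """Euler-Maruyama integration of the Lorenz-63 system with additive noise.

    All ensemble members feel the SAME noise increments, so the cloud of
    end-points approximates a time-t snapshot of the random (pullback) attractor.
    """
    rng = np.random.default_rng(seed)
    sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0   # classical Lorenz-63 parameters
    x = rng.uniform(-15, 15, size=(n_ens, 3))  # spread-out initial conditions
    for _ in range(int(t_end / dt)):
        dw = rng.normal(0.0, np.sqrt(dt))       # shared Wiener increment
        dx = sigma * (x[:, 1] - x[:, 0])
        dy = x[:, 0] * (rho - x[:, 2]) - x[:, 1]
        dz = x[:, 0] * x[:, 1] - beta * x[:, 2]
        x += dt * np.column_stack([dx, dy, dz])
        x[:, 0] += sigma_noise * dw             # additive noise on x only
    return x

snapshot = lorenz_ensemble()
```

Plotting such snapshots at successive times shows the attractor itself moving, which is the essential difference from the autonomous strange attractor.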
EGU2020-14267 | Displays | ITS3.1/NP1.2
Data quality in different paleo archives and covering different time scales: a key issue in studying tipping elements – Denis-Didier Rousseau, Susanna Barbosa, Witold Bagniewski, Niklas Boers, Eliza Cook, Jens Fohlmeister, Bedartha Goswami, Norbert Marwan, Sune Olander Rasmussen, Louise Sime, and Anders Svensson
Although the Earth system is described as reacting relatively abruptly to present anthropogenic forcings, the notion of abruptness remains questionable, as it refers to a time scale that is difficult to constrain properly. Recognizing this issue, the tipping elements listed in Lenton et al. (2008) rely on long-term observations under controlled conditions, which enabled the associated tipping points to be identified. For example, there is evidence nowadays that if the rate of deforestation from forest fires and climate change does not decrease, the Amazonian forest will reach a tipping point towards savanna (Nobre, 2019), which would impact the regional and global climate systems as well as various other ecosystems, directly or indirectly (Magalhães et al., 2019). However, while the presently evidenced tipping elements are mostly related to ongoing climate change, and thus directly or indirectly to anthropogenic forcing, their interpretation must still rely on former cases detected in the past, and especially on studies of abrupt climatic transitions evidenced in paleoclimate proxy records. Moreover, recent studies of past changes have shown that addressing abrupt transitions in the past raises the issue of the data quality of individual records, including the precision of the time scale and the quantification of the associated uncertainties. Investigating past abrupt transitions and the mechanisms involved requires the best data quality possible. This can be a serious limitation given the sparse spatial coverage of high-resolution paleo-records, where dating is critical and the corresponding errors are often challenging to control. In theory, this would almost limit our investigations to ice-core records of the last climate cycle, because they offer the best possible time resolution.
However, evidence shows that abrupt transitions can also be identified in deeper time from lower-resolution records, which still reveal changes or transitions that have impacted the dynamics of the Earth system globally. TiPES Work Package 1 will address these issues and collect paleorecords that document the temporal behavior of tipping elements in past climates, including several examples.
Lenton, T. et al. (2008). PNAS 105, 1786–1793.
Nobre, C. (2019). Nature 574, 455.
Magalhães, N. d. et al. (2019). Sci. Rep., 16914, doi:10.1038/s41598-019-53284-1.
This work is performed under the TiPES project, funded by the European Union’s Horizon 2020 research and innovation programme under grant agreement no. 820970 (https://tipes.sites.ku.dk/).
How to cite: Rousseau, D.-D., Barbosa, S., Bagniewski, W., Boers, N., Cook, E., Fohlmeister, J., Goswami, B., Marwan, N., Rasmussen, S. O., Sime, L., and Svensson, A.: Data quality in different paleo archives and covering different time scales: a key issue in studying tipping elements., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-14267, https://doi.org/10.5194/egusphere-egu2020-14267, 2020.
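The dating-error issue raised above can be made concrete with a first-order toy construction (ours, not TiPES methodology): perturb the tie-point ages of a linear age-depth model within their stated errors and propagate by Monte Carlo. All names, depths, ages, and uncertainties below are illustrative:

```python
import numpy as np

def age_uncertainty(depths, tie_depths, tie_ages, tie_sigmas, n_mc=2000, seed=1):
    """Monte Carlo age-depth model: perturb tie-point ages within their
    1-sigma errors, interpolate linearly between them, and return the mean
    age and age spread at each sample depth. A first-order treatment of
    how dating error propagates into a proxy record's time scale."""
    rng = np.random.default_rng(seed)
    ages = np.empty((n_mc, len(depths)))
    for i in range(n_mc):
        perturbed = np.sort(tie_ages + rng.normal(0, tie_sigmas))  # keep monotone
        ages[i] = np.interp(depths, tie_depths, perturbed)
    return ages.mean(axis=0), ages.std(axis=0)

mean_age, sigma_age = age_uncertainty(
    depths=np.linspace(0, 100, 11),
    tie_depths=np.array([0.0, 50.0, 100.0]),
    tie_ages=np.array([0.0, 11700.0, 25000.0]),   # illustrative ages (yr BP)
    tie_sigmas=np.array([50.0, 100.0, 300.0]))    # illustrative 1-sigma errors
```

Even this simple model shows the age uncertainty growing between and beyond well-dated tie points, which is exactly what makes identifying "abrupt" transitions time-scale dependent.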
EGU2020-9618 | Displays | ITS3.1/NP1.2
A Noisy Excitable Model of Millennial Scale Glacial Climate Variability – Guido Vettoretti, Peter Ditlevsen, Markus Jochum, Sune Rasmussen, and Kerim Nisancioglu
The Dansgaard-Oeschger (D-O) oscillation recorded in isotopic analyses of Greenland ice cores is a millennial-scale climate oscillation that alternates very rapidly between cold and warm climate states. In contrast to theories invoking Heinrich-event-forced oscillations, or stochastic noise-induced transitions between on and off states of the Atlantic Meridional Overturning Circulation, theories are emerging that propose that the D-O oscillation is an intrinsic, stable glacial limit-cycle (relaxation) oscillation that can be perturbed by internal and external forcing. Here we use the Community Earth System Model (CESM), run with glacial boundary conditions, which accurately simulates internal, unforced D-O oscillations that can be modulated by radiative forcing, freshwater forcing, and changes in ocean mixing. Based on our set of CESM climate simulations, we propose a clear process-based framework that explains the natural intrinsic timescale of the millennial-scale climate transitions. We build a reduced, planar dynamical-system model whose parameters are informed by the fully coupled glacial climate model. This simple system can produce self-sustained, millennial-scale abrupt climate transitions, which can be modulated by forcing and display behaviour like that observed in the complex model. We conclude that the physics underlying the glacial climate system is characterized by an excitable system susceptible to coherence resonance, with analogues in biological systems that operate on vastly different spatial and time scales.
How to cite: Vettoretti, G., Ditlevsen, P., Jochum, M., Rasmussen, S., and Nisancioglu, K.: A Noisy Excitable Model of Millennial Scale Glacial Climate Variability, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9618, https://doi.org/10.5194/egusphere-egu2020-9618, 2020.
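The planar excitable model itself is not given in the abstract; the generic FitzHugh-Nagumo system with additive noise is a standard stand-in that illustrates the mechanism invoked: without noise the system sits at a stable rest state, while noise kicks it over the excitation threshold and triggers repeated relaxation-oscillation-like excursions. All parameter values and the spike-counting heuristic are our illustrative assumptions:

```python
import numpy as np

def fhn_spike_count(noise_amp, t_end=2000.0, dt=0.01, seed=2):
    """Noisy FitzHugh-Nagumo in the excitable regime (a > 1: stable rest state).

    Counts upward crossings of the fast variable past a crude threshold;
    with zero noise the system relaxes to rest and no excursions occur."""
    rng = np.random.default_rng(seed)
    eps, a = 0.05, 1.05
    v, w = -1.0, -0.5
    spikes, above = 0, False
    for _ in range(int(t_end / dt)):
        dv = (v - v**3 / 3 - w) / eps          # fast (excitable) variable
        dw = v + a                              # slow recovery variable
        v += dt * dv + noise_amp * np.sqrt(dt) * rng.normal()
        w += dt * dw
        if v > 0.5 and not above:               # upward crossing = one excursion
            spikes, above = spikes + 1, True
        elif v < 0.0:
            above = False
    return spikes

quiet = fhn_spike_count(0.0)   # deterministic: no excursions
noisy = fhn_spike_count(0.3)   # moderate noise: repeated excursions
```

Coherence resonance refers to the regularity of such noise-triggered excursions peaking at an intermediate noise amplitude; sweeping `noise_amp` and measuring the inter-spike-interval variability would exhibit it.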
EGU2020-16665 | Displays | ITS3.1/NP1.2
Predicting past tipping points: The Dansgaard-Oeschger events of the last glacial period – Johannes Lohmann and Peter Ditlevsen
The Dansgaard-Oeschger (DO) events of the last glacial period provide a unique example of large-scale climate change on centennial time scales. Despite significant progress in modeling DO-like transitions with realistic climate models, it is still unknown what ultimately drives these changes. It is an outstanding problem whether they are driven by a self-sustained oscillation of the Earth system, or by stochastic perturbations such as freshwater discharges into the North Atlantic or extremes in atmospheric dynamics.
This work addresses the question of whether DO events fall into the realm of tipping points in the mathematical sense, either driven by an underlying bifurcation, noise or a rate-dependent instability, or whether they are a true and possibly chaotic oscillation. To do this, different ice core proxy data and empirical predictability can be used as a discriminator.
The complex temporal pattern of DO events has previously been used to suggest that the transitions between cold (stadial) and warm (interstadial) phases are purely noise-induced and thus unpredictable. In contrast, evidence is presented that trends in proxy records of Greenland ice cores within the stadial and interstadial phases pre-determine the impending abrupt transitions and allow their prediction. As a result, they cannot be purely noise-induced.
The observed proxy trends manifest consistent reorganizations of the climate system at specific time scales and give some hints about the physical processes involved. Nevertheless, it remains to be explained what sets the complex temporal pattern, i.e., the highly variable and largely uncorrelated time scales of individual DO excursions.
How to cite: Lohmann, J. and Ditlevsen, P.: Predicting past tipping points: The Dansgaard-Oeschger events of the last glacial period, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-16665, https://doi.org/10.5194/egusphere-egu2020-16665, 2020.
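A minimal sketch of the kind of within-phase trend diagnostic implied above (our own construction; the abstract does not specify the statistical method): an ordinary-least-squares slope and its t-statistic distinguish a slowly drifting, partially predictable segment from pure noise.

```python
import numpy as np

def trend_test(x, dt=1.0):
    """OLS slope of a (proxy-like) segment and its t-statistic against zero.

    A significant within-phase trend is the kind of signal that would make
    the next transition partially predictable rather than purely noise-driven."""
    t = np.arange(len(x)) * dt
    tm, xm = t.mean(), x.mean()
    sxx = np.sum((t - tm) ** 2)
    slope = np.sum((t - tm) * (x - xm)) / sxx
    resid = x - (xm + slope * (t - tm))
    se = np.sqrt(resid.var(ddof=2) / sxx)   # standard error of the slope
    return slope, slope / se

rng = np.random.default_rng(3)
drifting = 0.01 * np.arange(500) + rng.normal(0, 1, 500)  # trend + noise
flat = rng.normal(0, 1, 500)                               # noise only
```

Applied to the synthetic series, `trend_test(drifting)` yields a strongly significant slope while `trend_test(flat)` does not; real proxy records would additionally require accounting for autocorrelation in the residuals.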
EGU2020-1399 * | Displays | ITS3.1/NP1.2 | Highlight
A sea ice-free Arctic during the Last Interglacial supports fast future loss – Maria Vittoria Guarino, Louise Sime, David Schroeder, Irene Malmierca-Vallet, Erica Rosenblum, Mark Ringer, Jeff Ridley, Danny Feltham, Cecilia Bitz, Eric Steig, Eric Wolff, Julienne Stroeve, and Alistair Sellar
The Last Interglacial (LIG) is a period of great importance as an analog for future climate change. Global sea level was 6-9 m higher than present. Stronger LIG summertime insolation at high northern latitudes drove Arctic land summer temperatures around 4-5 K higher than during the preindustrial era. Climate-model simulations have previously failed to capture these elevated temperatures. This may be because these models failed to correctly capture LIG sea ice changes.
Here, we show that the latest version of the UK Hadley Center coupled ocean-atmosphere climate model (HadGEM3) simulates a much improved Arctic LIG climate, including the observed high temperatures. Improved model physics in HadGEM3, including a sophisticated sea ice melt-pond scheme, results in the first-ever simulation of the complete loss of Arctic sea ice in summer during the LIG.
Our ice-free Arctic yields a compelling solution to the long-standing puzzle of what drove LIG Arctic warmth. The LIG simulation result is a new independent constraint on the strength of Arctic sea ice decline in climate-model projections, and provides support for a fast retreat of Arctic summer sea ice in the future.
How to cite: Guarino, M. V., Sime, L., Schroeder, D., Malmierca-Vallet, I., Rosenblum, E., Ringer, M., Ridley, J., Feltham, D., Bitz, C., Steig, E., Wolff, E., Stroeve, J., and Sellar, A.: A sea ice-free Arctic during the Last Interglacial supports fast future loss, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1399, https://doi.org/10.5194/egusphere-egu2020-1399, 2020.
EGU2020-19531 | Displays | ITS3.1/NP1.2
Simulating an abrupt termination of the Holocene African Humid period using an optimised configuration of HadCM3 – Peter Hopcroft, Paul Valdes, William Ingram, and Ruza Ivanovic
The Holocene 'greening' and subsequent, possibly abrupt, desertification of the Sahara is a fascinating example of natural environmental change. It was driven by a gradual decline in summer insolation, with land-atmosphere coupling by vegetation likely providing additional reinforcing feedbacks. However, the majority of general circulation models (GCMs) cannot produce enough precipitation to sustain a 'Green' Sahara, and the transient evolution through the Holocene has therefore only been studied with a few models. We present a suite of transient simulations with the coupled atmosphere-ocean GCM HadCM3, the CMIP3 version of the Met Office's Hadley Centre model. These simulations cover the Holocene from 10,000 years before present and optionally include recently developed optimisations of the atmospheric convection and dynamic vegetation parameterisations. In the model run with both optimisations, HadCM3 shows a convincing 'greening' for the first time. This is followed by a series of abrupt oscillations in vegetation cover and hydrology that culminates in an abrupt collapse at around 6,000 years before present. We compare the behaviour in four model versions and make a detailed evaluation against the available geological evidence. Our results show that the stability of climate models is determined by the chosen parameter values and formulations. We conclude that novel methods are needed for inferring suitable model state-space regions from both present-day observations and palaeoclimate reconstructions.
How to cite: Hopcroft, P., Valdes, P., Ingram, W., and Ivanovic, R.: Simulating an abrupt termination of the Holocene African Humid period using an optimised configuration of HadCM3, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19531, https://doi.org/10.5194/egusphere-egu2020-19531, 2020.
EGU2020-9484 | Displays | ITS3.1/NP1.2
Spatial early warnings of the transition to superrotation: Studying a bifurcation in the general circulation using an idealized GCMMark Williamson
A superrotating atmosphere, one in which the angular momentum of the atmosphere exceeds that of solid-body rotation with the planet, occurs on Venus and Titan. However, it may also have occurred on Earth in the hothouse climates of the Early Cenozoic, and some climate models have transitioned abruptly to a superrotating state under the more extreme global warming scenarios. On Earth, the transition to superrotation would turn the prevailing equatorial easterlies into westerlies, accompanied by large changes in global circulation patterns. Although current thinking is that this scenario is unlikely, it shares features of other global tipping points in that it is a low-probability, high-risk event.
More than anything though, this tipping point serves as an ideal example on which to test spatial early warning methods. I’ll show some preliminary results on how the critical spatial modes and time scales change through the transition to superrotation using an idealized general circulation model (GCM), Isca.
How to cite: Williamson, M.: Spatial early warnings of the transition to superrotation: Studying a bifurcation in the general circulation using an idealized GCM, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9484, https://doi.org/10.5194/egusphere-egu2020-9484, 2020.
EGU2020-17889 | Displays | ITS3.1/NP1.2
Climate Tipping Points: Can they trigger a Global Cascade?David Armstrong McKay, Arie Staal, Sarah Cornell, Timothy Lenton, and Ingo Fetzer
Over the past 15 years climate tipping points have emerged as both an important research topic and a source of public concern. Some articles have suggested that certain tipping points could begin within the 1.5–2°C Paris climate target range, with many more potentially starting by the ~3–4°C of warming that current policy is projected to commit to. Recent work has also proposed that these tipping points could interact and potentially ‘cascade’ – with the impacts of passing one tipping point being sufficient to trigger the next, and so on – resulting in an emergent global tipping point for a long-term commitment to a ‘Hothouse Earth’ trajectory of 4°C or more (Steffen et al., 2018). However, much of the recent discussion relies largely on a decade-old characterisation of climate tipping points, based on a literature review and expert elicitation exercise. An updated characterisation would fully utilise more recent results from coupled and offline models, model inter-comparisons, and palaeoclimate studies. The ‘tipping cascade’ hypothesis has also not yet been tested, with the suggestion of 2°C as the global tipping point remaining speculative. Furthermore, the definition of what counts as a climate tipping point is often inconsistent, with some purported tipping points represented more accurately as threshold-free positive feedbacks. Here we perform an updated systematic review of climate tipping points, cataloguing the current evidence for each suggested element with reference to rigorously applied tipping point definitions. Based on this we test the potential for a global tipping cascade using a stylised model, from which we will present preliminary results.
References
Steffen, W., et al.: Trajectories of the Earth System in the Anthropocene, Proc. Natl. Acad. Sci., 115(33), 8252–8259, doi:10.1073/pnas.1810141115, 2018.
How to cite: Armstrong McKay, D., Staal, A., Cornell, S., Lenton, T., and Fetzer, I.: Climate Tipping Points: Can they trigger a Global Cascade?, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17889, https://doi.org/10.5194/egusphere-egu2020-17889, 2020.
EGU2020-21507 | Displays | ITS3.1/NP1.2
Risk analysis approach for tipping cascades and domino effects in the Earth system under global warmingJonathan Donges, Nico Wunderling, Jürgen Kurths, and Ricarda Winkelmann
Tipping elements in the Earth's climate system are continental-scale subsystems that are characterized by a threshold behavior with potentially large short- to long-term impacts on human societies. It has been suggested that these include biosphere components (e.g. the Amazon rainforest and coral reefs), cryosphere components (e.g. the Greenland and Antarctic ice sheets) and large-scale atmospheric and oceanic circulations (e.g. the AMOC, ENSO and Indian summer monsoon). Interactions and feedbacks of climate tipping elements via various processes could increase the likelihood of crossing tipping points under a given level of global warming and interaction strength. However, studying these potential domino effects and tipping cascades with process-detailed state-of-the-art Earth system models is difficult so far, because relevant tipping elements are often not represented and uncertainties in their properties and interactions are large.
To bridge this current gap in the model hierarchy, we present a risk analysis approach based on a paradigmatic model of interacting tipping elements that propagates uncertainties in interaction structure, sign and strength, as well as critical thresholds and other parameters, via large Monte Carlo ensembles. Our approach allows us to study the likelihood of domino effects and tipping cascades emerging due to pairwise interactions and feedbacks to global mean temperature. We apply our approach to a subset of five potential tipping elements (Greenland and West Antarctic ice sheets, AMOC, Amazon rainforest and ENSO) with known parameter uncertainty estimates, and find that their interactions overall tend to be destabilizing. The presented framework is flexible and can be adapted to study the interaction effects of other or additional tipping elements, and more detailed submodels for describing their individual dynamics.
How to cite: Donges, J., Wunderling, N., Kurths, J., and Winkelmann, R.: Risk analysis approach for tipping cascades and domino effects in the Earth system under global warming, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21507, https://doi.org/10.5194/egusphere-egu2020-21507, 2020.
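The interacting-elements approach described above can be illustrated with a minimal, hypothetical sketch (not the authors' model): two cusp-type elements with dynamics dx/dt = -x^3 + x + c, where an uncertain coupling strength is sampled in a Monte Carlo ensemble to estimate how often tipping the first element cascades to the second. All parameter values below are assumptions chosen for illustration.

```python
import numpy as np

# Hypothetical two-element cascade model (illustration only).
# Each element follows cusp-type dynamics dx/dt = -x**3 + x + c, which has a
# fold bifurcation at c = 2/(3*sqrt(3)) ~ 0.385.  Element 1 is forced past
# its threshold (c1 = 0.5); element 2 is subcritical alone (c2 = 0.2) but
# receives an extra forcing d*(x1 + 1)/2 once element 1 tips.

def cascade_fraction(n_runs=500, c1=0.5, c2=0.2, d_max=0.4, seed=0,
                     dt=0.01, n_steps=10_000):
    """Monte Carlo estimate of how often element 2 tips as well."""
    rng = np.random.default_rng(seed)
    d = rng.uniform(0.0, d_max, n_runs)   # uncertain coupling strength
    x1 = np.full(n_runs, -1.0)            # both elements start untipped
    x2 = np.full(n_runs, -1.0)
    for _ in range(n_steps):              # forward Euler over the whole ensemble
        dx1 = -x1**3 + x1 + c1
        dx2 = -x2**3 + x2 + c2 + d * (x1 + 1.0) / 2.0
        x1 = x1 + dt * dx1
        x2 = x2 + dt * dx2
    return float(np.mean(x2 > 0.0))       # fraction of runs with a full cascade

frac = cascade_fraction()                 # roughly half the sampled couplings cascade
```

With these assumed numbers the cascade probability simply reflects the fraction of sampled couplings strong enough to push element 2 past its fold; adding feedbacks to a shared temperature variable would follow the same pattern.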
EGU2020-3790 | Displays | ITS3.1/NP1.2
Atlantic Salinity Pileup as a Remote Fingerprint of Weakening Atlantic Overturning Circulation under Anthropogenic WarmingChenyu Zhu and Zhengyu Liu
Climate models show a weakening Atlantic meridional overturning circulation (AMOC) under global warming. Limited by short direct measurements, this AMOC slowdown has been inferred, with some uncertainty, indirectly from AMOC fingerprints local to the subpolar North Atlantic region. Here we present observational and modeling evidence of the first remote fingerprint of AMOC slowdown outside the North Atlantic. Under global warming, the weakening AMOC reduces the salinity divergence and thereby leads to a remote fingerprint of “salinity pileup” in the South Atlantic. Our study supports the AMOC slowdown under anthropogenic warming and, furthermore, shows that this weakening has occurred all the way into the South Atlantic.
How to cite: Zhu, C. and Liu, Z.: Atlantic Salinity Pileup as a Remote Fingerprint of Weakening Atlantic Overturning Circulation under Anthropogenic Warming, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3790, https://doi.org/10.5194/egusphere-egu2020-3790, 2020.
EGU2020-5847 | Displays | ITS3.1/NP1.2
Abrupt changes across the Arctic permafrost region endanger northern developmentBernardo Teufel and Laxmi Sushama
Extensive degradation of near-surface permafrost is projected during the 21st century, which will have detrimental effects on northern communities, ecosystems and engineering systems. This degradation is expected to have consequences for many processes, and most previous modelling studies suggested these would occur gradually. Here, we project that soil moisture will decrease abruptly (within a few months) in response to permafrost degradation over large areas of the present-day permafrost region, based on analysis of transient climate change simulations performed using a state-of-the-art regional climate model. This regime shift is reflected in abrupt increases in summer near-surface temperature and convective precipitation, and decreases in relative humidity and surface runoff. Of particular relevance to northern systems are changes to the bearing capacity of the soil due to increased drainage, increases in the potential for intense rainfall events, and increases in lightning frequency. Combined with increases in forest fuel combustibility, these are projected to abruptly and substantially increase the severity of wildfires, which constitute one of the greatest risks to northern ecosystems, communities and infrastructure. The fact that these changes are projected to occur abruptly further increases the challenges associated with climate change adaptation and potential retrofitting measures.
How to cite: Teufel, B. and Sushama, L.: Abrupt changes across the Arctic permafrost region endanger northern development, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5847, https://doi.org/10.5194/egusphere-egu2020-5847, 2020.
EGU2020-7369 | Displays | ITS3.1/NP1.2
Observed early-warning signals for a Greenland-ice-sheet tipping pointMartin Rypdal and Niklas Boers
Nonlinear feedbacks, such as the melt-elevation feedback, may produce a critical temperature threshold beyond which the current state of the Greenland Ice Sheet loses stability. Hence, the ice sheet may exhibit an abrupt transition under ongoing global warming, with substantial impacts on global sea level and the Atlantic Meridional Overturning Circulation. Melting rates across Greenland and solid ice discharge at the ice sheet's margins have recently accelerated. In this work, we analyze ice sheet runoff reconstructions and process-based simulations using new methods. We compare the acceleration in the runoff with the statistical properties of fluctuations around the system's equilibrium. The analysis uncovers significant early-warning signals for an ongoing destabilization and substantial further mass loss in the near future.
How to cite: Rypdal, M. and Boers, N.: Observed early-warning signals for a Greenland-ice-sheet tipping point, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7369, https://doi.org/10.5194/egusphere-egu2020-7369, 2020.
EGU2020-7454 | Displays | ITS3.1/NP1.2
Potential Tipping Points of Antarctic Ice Sheet BasinsSainan Sun, Frank Pattyn, Gael Durand, Lars Zipf, Kevin Bulthuis, Heiko Goelzer, and Konstanze Haubner
Antarctica is losing mass at an accelerating rate, and these losses are considered the major source of sea-level rise in the coming centuries. Ice-sheet mass loss is mainly triggered by decreased buttressing from ice shelves, largely due to ice-ocean interaction. This loss could be self-sustained in potentially unstable regions where the grounded ice lies on a bedrock below sea level that slopes down towards the interior of the ice sheet, leading to the so-called marine ice sheet instability (MISI).
Recent observations of accelerated grounding-line retreat and insights from modelling the West Antarctic ice sheet give evidence that MISI is already underway. Moreover, similar topographic configurations are also observed in East Antarctica, particularly in Wilkes Land. We present an ensemble of simulations of the Antarctic ice sheet using the f.ETISh ice-sheet model to evaluate tipping points that trigger MISI, by forcing the model with sub-shelf melt pulses of varying amplitude and duration. As uncertainties in ice-sheet models limit the ability to provide precise sea-level rise projections, we implement probabilistic methods to investigate the influence of several sources of uncertainty, such as basal conditions. From the uncertainty analysis, we identify confidence regions for grounded ice, interpreted as regions of the Antarctic ice sheet that remain ice-covered for a given level of probability. Finally, we discuss for each Antarctic basin the total melt energy needed to reach tipping points leading to sustained MISI.
How to cite: Sun, S., Pattyn, F., Durand, G., Zipf, L., Bulthuis, K., Goelzer, H., and Haubner, K.: Potential Tipping Points of Antarctic Ice Sheet Basins, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7454, https://doi.org/10.5194/egusphere-egu2020-7454, 2020.
EGU2020-3797 | Displays | ITS3.1/NP1.2
Diversity of Global Change Factors and Tipping PointsMasahiro Ryo and Matthias Rillig
Global change is not only about climate change. Several changes in the Earth System occur concurrently and sequentially, and novel factors, such as microplastic pollutants, are still being identified as emerging problems. Global change is diverse; nonetheless, little is known about the role of multiple co-occurring global changes. Can we safely assume that the effects of multiple global change factors are independent of each other? Or should we be concerned about the potential for synergistic interaction, where the joint effect of multiple factors can be larger than the sum of their single effects?
Our talk focuses on ‘the diversity of global change factors’ – how the diversity of global change factors can increase, and how this diversity can affect environmental systems in the context of tipping points. We also show empirical evidence that an increasing number of global change factors can cause abrupt shifts in a soil system (cf. Rillig et al. 2019 in Science). We emphasize the urgent need to investigate the expected roles of an increasing diversity of global change factors as an emerging threat to nature and society.
How to cite: Ryo, M. and Rillig, M.: Diversity of Global Change Factors and Tipping Points, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3797, https://doi.org/10.5194/egusphere-egu2020-3797, 2020.
EGU2020-5979 | Displays | ITS3.1/NP1.2
Physical measures and tipping points in a changing climatePeter Ashwin and Julian Newman
For an autonomous dynamical system, an invariant measure is called physical or natural if it describes the statistics of a typically chosen trajectory that started an arbitrarily long time ago in the past, i.e. without transients. In order to apply such a concept to systems with time-varying forcing, we need to develop an analogous notion for nonautonomous dynamical systems, where the measure is not fixed but evolves in time under the action of the nonautonomous system. The importance of such measures, and the pullback attractors on which they are supported, for interpreting climate statistics has been highlighted by Chekroun, Simonnet and Ghil (2011) Physica D 240:1685. We seek to gain a deeper understanding of these measures and their implications for tipping points. We present some results for two classes of nonautonomous systems: autonomous random dynamical systems driven by stationary memoryless noise, and deterministic nonautonomous systems that are asymptotically autonomous in the negative-time limit. In both cases we show existence of a physical measure under suitable assumptions. We highlight further questions about defining rates of mixing in such a setting, as well as implications for prediction of tipping points.
This work has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 820970 (TiPES).
How to cite: Ashwin, P. and Newman, J.: Physical measures and tipping points in a changing climate, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5979, https://doi.org/10.5194/egusphere-egu2020-5979, 2020.
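The pullback construction invoked above can be illustrated with a toy example (an assumption for illustration, not taken from the abstract): for dx/dt = -x + sin(t), an ensemble of states started arbitrarily far in the past collapses onto a single time-dependent solution x*(t) = (sin t - cos t)/2, the support of the evolving natural measure.

```python
import numpy as np

# Toy pullback attractor (assumed example): dx/dt = -x + sin(t).
# Starting an ensemble of initial conditions far in the past and integrating
# forward to a fixed time t_end, all trajectories collapse onto the unique
# pullback solution x*(t) = (sin(t) - cos(t)) / 2.

def pullback_ensemble(t_end=0.0, t_start=-40.0, n_ic=100, dt=0.001):
    x = np.linspace(-5.0, 5.0, n_ic)      # spread of states in the far past
    n = round((t_end - t_start) / dt)
    for i in range(n):                    # forward Euler
        x += dt * (-x + np.sin(t_start + i * dt))
    return x

x0 = pullback_ensemble()
# The ensemble has contracted onto x*(0) = (sin 0 - cos 0)/2 = -0.5,
# independently of the initial conditions.
```

Pushing t_start further into the past does not change x0, which is exactly the pullback limit the abstract's measures are built on.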
EGU2020-5555 | Displays | ITS3.1/NP1.2
How fast to turn around: preventing tipping after a system has crossed a climate tipping thresholdPaul Ritchie, Peter Cox, and Jan Sieber
A classical scenario for tipping is that a dynamical system experiences a slow parameter drift across a fold tipping point, caused by a runaway positive feedback loop. We study what happens if one turns around after crossing the threshold. We derive a simple criterion for avoiding tipping: an inverse-square law that relates the maximum exceedance of the parameter beyond the tipping threshold, and the time the parameter stays above the threshold, to observable properties of the dynamical system near the fold. We demonstrate the inverse-square law relationship using simple models of recognised potential future tipping points in the climate system.
How to cite: Ritchie, P., Cox, P., and Sieber, J.: How fast to turn around: preventing tipping after a system has crossed a climate tipping threshold, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5555, https://doi.org/10.5194/egusphere-egu2020-5555, 2020.
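The overshoot scenario can be sketched with the fold normal form dx/dt = p(t) + x^2, where p is pulsed above the tipping threshold p = 0. This is a qualitative illustration of the amplitude-versus-duration trade-off under assumed parameter values, not the derivation of the inverse-square law itself.

```python
import numpy as np

# Fold normal form with a temporary overshoot of the threshold (assumed toy
# model): dx/dt = p(t) + x**2, with p(t) = p0 + amp * exp(-((t - t0)/tau)**2).
# The pulse peak (p0 + amp = 0.5) exceeds the fold at p = 0, so tipping is
# possible; whether it occurs depends on how long p stays above threshold.

def tips(tau, p0=-1.0, amp=1.5, t0=10.0, dt=0.001, t_end=30.0):
    """True if a threshold pulse of width tau tips the system past the fold."""
    x = -np.sqrt(-p0)                     # start on the stable equilibrium
    for t in np.arange(0.0, t_end, dt):   # forward Euler
        p = p0 + amp * np.exp(-((t - t0) / tau) ** 2)
        x += dt * (p + x**2)
        if x > 10.0:                      # runaway growth: tipped
            return True
    return False

# A fast turnaround (tau = 0.3) avoids tipping; holding the same peak
# exceedance for longer (tau = 5.0) does not.
```

The state barely moves during a brief exceedance and relaxes back once p drops below threshold, while a slow turnaround leaves enough time above threshold for the runaway x^2 term to take over.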
EGU2020-11690 | Displays | ITS3.1/NP1.2
Abrupt transitions, wave interactions and precipitation extremes in climate
John Bruun, Spiros Evangelou, Katy Sheen, and Mat Collins
There is an urgent need to better understand how climatic change may cause abrupt transitions and tipping points in the underlying dynamic process (κ → κ′) that can result in more severe extremes. The variability in precipitation-based flooding and arid events in the Sahel and SE Asia may be related to alterations in the Atlantic (AMOC) and Pacific (ENSO) modes, and in how they teleconnect. Extreme value process distributions are widely used for assessing the environment. In this work we apply a spatial Dominant Frequency State Analysis (DFSA) to GPCC reanalysis data to evaluate the properties of precipitation extremes and dry arid events in these regions. The spatial variation we find implies that wave interaction properties vary and that waveguide teleconnection is important. The physical wave-interaction reasons for why extremes occur and how they vary have not been fully explained to date: that is a statistical mechanics problem. For earth system climate analysis, General Circulation Model simulation sizes are too small (10 to 30 ensemble members, due to computational complexity) to carry out such a large ensemble analysis. However, large ensembles are intrinsic to the study of Anderson localization and to Random Matrix Theory (RMT) transport studies. We therefore use a theory-based approach to provide a wave-interaction explanation of how differing forms of extreme can occur. This theory work is a generic advance in the study of wave propagation phenomena and extremes in the presence of disorder. To do this we merge the universal wave transport approach used in solids with the geometrical max-stable universal law of extreme value theory to evaluate the ensemble based on wave interaction principles. This provides a generic ensemble random Hamiltonian and characteristic polynomial, giving a physical proof for encountering extreme value processes.
This shows that the Generalized Extreme Value (GEV) shape parameter ξ is a diagnostic tool that accurately distinguishes localized from delocalized systems, and this property should hold for all wave-based transport phenomena. This work establishes that ξ(κ) can change when the dynamical system fundamentally changes its physical structure (κ → κ′) and that this is a universal result. For our earth system, a disorder-induced transition to a heavy-tailed process could indicate that a wave localization state has occurred in some locations. If this were the case, the associated climate phenomena would become dominated by destructive wave interference that can manifest as a catastrophic breakdown, for example as an extreme runaway of temperatures. We discuss this wave interaction theory result in the context of precipitation extremes and how these may be altering for the Sahel and SE Asia.
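The diagnostic role of the GEV shape parameter ξ can be illustrated by fitting block maxima drawn from a light-tailed and a heavy-tailed parent distribution. The parent distributions, sample sizes, and tolerances below are illustrative choices, not the authors' data or analysis; note that scipy parameterises the GEV with c = -ξ.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
block = 200       # block size for block maxima
n_blocks = 500

# Light-tailed parent (Gaussian): block maxima lie in the Gumbel domain, xi ~ 0
light_maxima = rng.standard_normal((n_blocks, block)).max(axis=1)
# Heavy-tailed parent (Pareto, alpha = 2): Frechet domain, xi ~ 1/alpha = 0.5
heavy_maxima = rng.pareto(2.0, (n_blocks, block)).max(axis=1)

# genextreme.fit returns (c, loc, scale) with c = -xi
xi_light = -genextreme.fit(light_maxima)[0]
xi_heavy = -genextreme.fit(heavy_maxima)[0]
print(f"xi (light tail) = {xi_light:.2f}, xi (heavy tail) = {xi_heavy:.2f}")
```

A shift of the fitted ξ toward positive, heavy-tailed values in observations would be the kind of signature the abstract associates with a disorder-induced transition.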
Bruun and Evangelou (2019) Anderson localization and extreme values in chaotic climate dynamics, arXiv:1911.03998.
Bruun, Sheen, Skákala, Evangelou and Collins (2019), Modulation of arid Sahel conditions by earth system modes, Geophysical Research Abstracts.
Bruun, Allen and Smyth (2017) Heartbeat of the Southern Oscillation explains ENSO climatic resonances, JGR Oceans, 122, 6746–6772, doi:10.1002/2017JC012892.
How to cite: Bruun, J., Evangelou, S., Sheen, K., and Collins, M.: Abrupt transitions, wave interactions and precipitation extremes in climate, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11690, https://doi.org/10.5194/egusphere-egu2020-11690, 2020.
EGU2020-9491 | Displays | ITS3.1/NP1.2 | Lewis Fry Richardson Medal Lecture
Global Stability Properties of the Climate: Melancholia States, Invariant Measures, and Phase Transitions
Valerio Lucarini
For a wide range of values of the incoming solar radiation, the Earth features at least two attracting states, which correspond to competing climates. The warm climate is analogous to the present one; the snowball climate features global glaciation and conditions that can hardly support life forms. Paleoclimatic evidence suggests that in the past our planet flipped between these two states. The main physical mechanism responsible for such instability is the ice-albedo feedback. Following an idea developed by Eckhardt and co-workers for the investigation of multistable turbulent flows, we study the global instability giving rise to the snowball/warm multistability in the climate system by identifying the climatic Melancholia state, a saddle embedded in the boundary between the two basins of attraction of the stable climates. We then introduce random perturbations as modulations to the intensity of the incoming solar radiation. We observe noise-induced transitions between the competing basins of attraction. In the weak-noise limit, large deviation laws define the invariant measure and the statistics of escape times. By empirically constructing the instantons, we show that the Melancholia states are the gateways for the noise-induced transitions in the weak-noise limit. In the region of multistability, in the zero-noise limit, the measure is supported on only one of the competing attractors. For low (high) values of the solar irradiance, the limit measure is the snowball (warm) climate. The changeover between the two regimes corresponds to a first-order phase transition in the system. The framework we propose seems to be of general relevance for the study of complex multistable systems. Finally, we propose a new method for constructing Melancholia states from direct numerical simulations, thus bypassing the need to use the edge-tracking algorithm.
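The classical edge-tracking algorithm that the abstract's new method bypasses can be sketched in one dimension: bisect between initial conditions that fall into different basins of the double-well system dx/dt = x - x^3, converging onto the saddle at x = 0, the analogue of the Melancholia state. This toy example illustrates only the algorithm, not the climate model used in the study.

```python
import numpy as np

def attractor(x0, dt=0.01, n=5000):
    """Integrate dx/dt = x - x**3 and report which attractor (+1 or -1) is reached."""
    x = x0
    for _ in range(n):
        x += dt * (x - x**3)
    return np.sign(x)

# Edge tracking by bisection: keep two states that end on different attractors
lo, hi = -0.7, 0.8          # lo falls into the -1 basin, hi into the +1 basin
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if attractor(mid) < 0:
        lo = mid
    else:
        hi = mid

edge = 0.5 * (lo + hi)
print(edge)                 # converges to the saddle (edge state) at x = 0
```

In high-dimensional systems the same bisection is interleaved with short reintegrations to keep the two bracketing trajectories close to the basin boundary.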
Refs.
V. Lucarini, T. Bodai, Edge States in the Climate System: Exploring Global Instabilities and Critical Transitions, Nonlinearity 30, R32 (2017)
V. Lucarini, T. Bodai, Transitions across Melancholia States in a Climate Model: Reconciling the Deterministic and Stochastic Points of View, Phys. Rev. Lett. 122,158701 (2019)
How to cite: Lucarini, V.: Global Stability Properties of the Climate: Melancholia States, Invariant Measures, and Phase Transitions, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9491, https://doi.org/10.5194/egusphere-egu2020-9491, 2020.
EGU2020-2378 | Displays | ITS3.1/NP1.2
Complexity-based approach for El Niño magnitude forecasting before the spring predictability barrier
Jun Meng, Jingfang Fan, Josef Ludescher, Ankit Agarwala, Xiaosong Chen, Armin Bunde, Juergen Kurths, and Hans Joachim Schellnhuber
The El Niño Southern Oscillation (ENSO) is one of the most prominent interannual climate phenomena. Early and reliable ENSO forecasting remains a crucial goal, due to its serious implications for economies, societies, and ecosystems. Despite the development of various dynamical and statistical prediction models in recent decades, the “spring predictability barrier” (SPB) remains a great challenge for long (over 6-month) lead-time forecasting. To overcome this barrier, here we develop an analysis tool, the System Sample Entropy (SysSampEn), to measure the complexity (disorder) of the system composed of temperature anomaly time series in the Niño 3.4 region. When applying this tool to several near-surface air temperature and sea surface temperature datasets, we find that in all datasets a strong positive correlation exists between the magnitude of El Niño and the previous calendar year’s SysSampEn (complexity). We show that this correlation allows us to forecast the magnitude of an El Niño with a prediction horizon of 1 year and high accuracy (i.e., Root Mean Square Error = 0.23°C for the average of the individual dataset forecasts). For the 2018 El Niño event, our method forecast a weak El Niño with a magnitude of 1.11±0.23°C. The framework presented here not only facilitates long-term forecasting of the El Niño magnitude but can potentially also be used as a measure of the complexity of other natural or engineered complex systems.
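SysSampEn is the authors' system-level extension of sample entropy, and its exact definition is not given in the abstract. The sketch below therefore implements only the standard sample entropy (Richman and Moorman, 2000) on which it builds, to show how such a complexity measure separates an ordered from a disordered series; the signals and tolerances are illustrative assumptions.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Plain sample entropy: -ln(A/B), where B and A count pairs of length-m
    and length-(m+1) templates matching within tolerance r (Chebyshev distance),
    excluding self-matches."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    n = len(x)
    b = a = 0
    for i in range(n - m):
        for j in range(i + 1, n - m):
            if max(abs(x[i + k] - x[j + k]) for k in range(m)) < r:
                b += 1                      # length-m templates match
                if abs(x[i + m] - x[j + m]) < r:
                    a += 1                  # extension to length m+1 also matches
    return -np.log(a / b)

rng = np.random.default_rng(0)
t = np.arange(300)
regular = np.sin(2 * np.pi * t / 20)   # ordered signal: low entropy
noisy = rng.standard_normal(300)       # disordered signal: high entropy
print(sample_entropy(regular), sample_entropy(noisy))
```

In the study, higher complexity of the Niño 3.4 anomaly system in one calendar year correlates with a stronger El Niño the next; the toy above only demonstrates the entropy measure itself.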
How to cite: Meng, J., Fan, J., Ludescher, J., Agarwala, A., Chen, X., Bunde, A., Kurths, J., and Schellnhuber, H. J.: Complexity-based approach for El Niño magnitude forecasting before the spring predictability barrier, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2378, https://doi.org/10.5194/egusphere-egu2020-2378, 2020.
EGU2020-5470 | Displays | ITS3.1/NP1.2
Optimal policies with tipping points and uncertainties
Marina Martinez Montero, Nicola Botta, Nuria Brede, and Michel Crucifix
Global warming generates the possibility of 'abrupt' or 'irreversible' changes, associated with tipping points. Uncertainties are, however, sometimes invoked as an argument against political action. The Tipping Points in the Earth System (TIPES) project includes a work package whose goal is to rationalise the effects of uncertainty on what should be regarded as an 'optimal policy', given the possibility of tipping points.
To this end, we rely on two disciplinary fields. On the one hand, climate models integrate the dynamical principles, which determine the existence of 'tipping points'. On the other hand, formal decision theory defines the concept of optimal policies and allows us to compute them.
The current contribution outlines the implications and hypotheses needed for combining both frameworks. To exemplify this, we use a simple ice sheet model coupled to both carbon and aerosol models. The coupled system provides us with the formal basis to define the notions of control, irreversibility, and commitment. From this basis, we sketch out the mathematical problem of finding an optimal policy, with emphasis on what needs to be defined to pose the problem properly.
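The canonical way formal decision theory computes an optimal policy for a finite-horizon problem is backward induction. The toy below is a hypothetical two-state decision problem with a tipping risk, invented purely to illustrate the computation; it is not the TIPES framework or the coupled ice-sheet/carbon/aerosol model, and all numbers are illustrative assumptions.

```python
# Backward induction for a toy finite-horizon decision problem with tipping risk.
# States: "safe" or "tipped"; actions: "abate" (costly, lowers the tipping
# probability) or "emit" (cheap, raises it). All numbers are illustrative.

HORIZON = 10
P_TIP = {"abate": 0.02, "emit": 0.15}   # per-step probability of tipping
COST = {"abate": 1.0, "emit": 0.0}      # per-step action cost
TIP_COST = 5.0                          # per-step damage once tipped

def solve():
    v_safe, v_tipped = 0.0, 0.0         # terminal values
    policy = []
    for _ in range(HORIZON):            # step backward from the horizon
        v_tipped = TIP_COST + v_tipped  # once tipped, damages accrue each step
        best = min(
            ((act, COST[act] + P_TIP[act] * v_tipped + (1 - P_TIP[act]) * v_safe)
             for act in ("abate", "emit")),
            key=lambda kv: kv[1],
        )
        policy.append(best[0])
        v_safe = best[1]
    return policy[::-1], v_safe         # policy in forward time order

policy, expected_cost = solve()
print(policy, expected_cost)
```

Even this toy reproduces a qualitative feature of tipping-point economics: abatement is optimal early, when a tip would be costly for many remaining periods, and ceases to pay off near the horizon.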
How to cite: Martinez Montero, M., Botta, N., Brede, N., and Crucifix, M.: Optimal policies with tipping points and uncertainties, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5470, https://doi.org/10.5194/egusphere-egu2020-5470, 2020.
EGU2020-6894 | Displays | ITS3.1/NP1.2
Composite data set of last glacial Dansgaard/Oeschger events obtained from stable oxygen isotopes in speleothems
Jens Fohlmeister, Niklas Bores, Norbert Marwan, Andrea Columbu, Kira Rehfeld, Natasha Sekhon, Louise Sime, and Cristina Veiga-Pires
Millennial-scale climate variations called Dansgaard-Oeschger cycles occurred frequently during the last glacial, with their impact on climate centred on the North Atlantic region. These events are well captured, for example, by the stable oxygen isotope composition of continental ice from Greenland, but also in records from other regions. Recently, it has been shown that a water-isotope-enabled general circulation model is able to reproduce these millennial-scale oxygen isotope changes in Greenland (Sime et al., 2019). On a global scale, however, the performance of this isotope-enabled model has not been tested, as stable oxygen isotope records covering this millennial-scale variability were so far missing or not systematically compiled.
In the continental realm, speleothems provide an excellent archive of the oxygen isotope composition of precipitation during those rapid events. Here, we use a newly established speleothem database (SISAL; Atsawawaranunt et al., 2018) from which we extracted 126 speleothems that grew during some interval of the last glacial period. We established an automated method for identifying the rapid onsets of interstadials. While the method appears not sensitive enough to capture all warming events, owing to the diverse characteristics of speleothem data (temporal resolution, growth stops and dating uncertainties) and low signal-to-noise ratios, we are confident that it does not detect variations in stable oxygen isotopes that do not reflect stadial-interstadial transitions. Finally, all detected transitions were stacked for each speleothem record in order to provide a mean stadial-interstadial transition for various continental locations. This data set could be useful for future comparisons of isotope-enabled model simulations with corresponding observations, and for testing their ability to model millennial-scale variability.
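The simplest version of an automated onset detector compares the mean of a short window after each candidate time with the mean of the window before it and flags large positive jumps. The authors' method additionally has to cope with irregular resolution, hiatuses and dating uncertainty; the sketch below, on a synthetic evenly sampled record, only illustrates the basic idea, and all window sizes and thresholds are illustrative assumptions.

```python
import numpy as np

def detect_onsets(t, y, window=5, jump=1.5):
    """Flag times where the mean of the next `window` samples exceeds the mean
    of the previous `window` samples by more than `jump` (an abrupt shift up)."""
    onsets = []
    for i in range(window, len(y) - window):
        before = y[i - window:i].mean()
        after = y[i:i + window].mean()
        if after - before > jump:
            onsets.append(t[i])
    return onsets

# Synthetic record: two abrupt 2-unit shifts on a noisy baseline
rng = np.random.default_rng(1)
t = np.arange(400)
y = rng.normal(0.0, 0.2, 400)
y[100:160] += 2.0     # first "interstadial"
y[300:350] += 2.0     # second "interstadial"
print(detect_onsets(t, y))
```

Consecutive flags around a single jump would then be merged into one onset before stacking transitions across records.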
References
Atsawawaranunt, et al. (2018). The SISAL database: A global resource to document oxygen and carbon isotope records from speleothems. Earth System Science Data 10, 1687–1713
Sime, L. C., Hopcroft, P. O., Rhodes, R. H. (2019). Impact of abrupt sea ice loss on Greenland water isotopes during the last glacial period. PNAS 116, 4099-4104.
How to cite: Fohlmeister, J., Bores, N., Marwan, N., Columbu, A., Rehfeld, K., Sekhon, N., Sime, L., and Veiga-Pires, C.: Composite data set of last glacial Dansgaard/Oeschger events obtained from stable oxygen isotopes in speleothems, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6894, https://doi.org/10.5194/egusphere-egu2020-6894, 2020.
EGU2020-7874 | Displays | ITS3.1/NP1.2
Identifying Antarctic Ice Sheet Tipping Points
Bradley Reed, Mattias Green, Hilmar Gudmundsson, and Adrian Jenkins
Warmer atmospheric and oceanic temperatures have led to a six-fold increase in mass loss from Antarctica in the last four decades. It is difficult to predict how the ice sheet will respond to future warming because it is subject to positive feedback mechanisms, which could lead to destabilisation. Observational and modelling work has shown that ice streams in West Antarctica may be undergoing unstable and possibly irreversible retreat due to increased basal melting beneath their ice shelves. Being able to identify and predict stability thresholds in ice streams draining the Antarctic Ice Sheet could help establish early warning indicators of near-future abrupt changes in sea level.
Here, we use the shallow-ice flow model Úa to investigate the stability of an idealised ice stream from the third Marine Ice Sheet Model Intercomparison Project (MISMIP+). Initial results show that a gradual variation in ice viscosity, which corresponds to a change in temperature, causes the ice stream to undergo hysteresis across an overdeepened bed. This hysteresis means there are two tipping points, one for an advance phase and one for a retreat phase, both of which lie off the retrograde sloping bedrock. Beyond these tipping points, changes in ice stream grounding line position are unstable and irreversible. This behaviour is also apparent in wider ice streams although there is a change to the onset of instability and the location of tipping points. Further studies will investigate the additional effects of basal melting on these tipping points.
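The advance/retreat hysteresis described above can be mimicked with the bistable normal form dx/dt = mu + x - x^3, whose two fold bifurcations play the role of the two tipping points: sweeping the forcing mu up and then back down leaves the system on different branches at the same forcing. This is a minimal conceptual sketch, not the Úa ice-flow model or the MISMIP+ setup; all values are illustrative.

```python
import numpy as np

def settle(x, mu, dt=0.01, n=2000):
    """Relax dx/dt = mu + x - x**3 to the nearby stable equilibrium."""
    for _ in range(n):
        x += dt * (mu + x - x**3)
    return x

mus = np.linspace(-1.0, 1.0, 201)
x = settle(-1.0, mus[0])
forward = []
for mu in mus:                 # "advance" phase: sweep the forcing up
    x = settle(x, mu)
    forward.append(x)
backward = []
for mu in mus[::-1]:           # "retreat" phase: sweep it back down
    x = settle(x, mu)
    backward.append(x)
backward = backward[::-1]

i0 = len(mus) // 2             # index of mu = 0
print(forward[i0], backward[i0])   # two different states at the same forcing
```

The jumps between branches occur at different values of mu on the up- and down-sweeps (the two folds, at mu = ±2/(3√3) for this normal form), which is the signature of two distinct tipping points.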
How to cite: Reed, B., Green, M., Gudmundsson, H., and Jenkins, A.: Identifying Antarctic Ice Sheet Tipping Points, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7874, https://doi.org/10.5194/egusphere-egu2020-7874, 2020.
EGU2020-9362 | Displays | ITS3.1/NP1.2
Periodicity disruption of a model quasi-biennial oscillation of equatorial winds
Antoine Renaud, Louis-Philippe Nadeau, and Antoine Venaille
In the Earth's atmosphere, fast-propagating equatorial waves generate slow reversals of the large-scale stratospheric winds with a period of about 28 months. This quasi-biennial oscillation is a spectacular manifestation of wave-mean flow interactions in stratified fluids, with analogues in other planetary atmospheres and laboratory experiments. Recent observations of a disruption of this periodic behaviour have been attributed to external perturbations, but the mechanism explaining the disrupted response has remained elusive. We show the existence of secondary bifurcations and a quasiperiodic route to chaos in simplified models of the equatorial atmosphere, ranging from the classical Holton-Lindzen-Plumb model to fully nonlinear simulations of stratified fluids. Perturbations of the slow oscillations are strongly amplified in the vicinity of the secondary bifurcation point. This suggests that intrinsic dynamics may be as influential as external variability in explaining disruptions of regular wind reversals [1].
[1] Renaud, A., Nadeau, L. P., & Venaille, A. (2019). Periodicity Disruption of a Model Quasibiennial Oscillation of Equatorial Winds. Physical Review Letters, 122(21), 214504.
How to cite: Renaud, A., Nadeau, L.-P., and Venaille, A.: Periodicity disruption of a model quasi-biennial oscillation of equatorial winds, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9362, https://doi.org/10.5194/egusphere-egu2020-9362, 2020.
EGU2020-9765 | Displays | ITS3.1/NP1.2
Antarctic ice-sheet hysteresis in a three-dimensional hybrid ice-sheet model
Marisa Montoya, Jorge Alvarez-Solas, Alexander Robinson, Javier Blasco, Ilaria Tabone, and Daniel Moreno
Ice sheets, in particular the Antarctic Ice Sheet (AIS), are considered potential tipping elements (TEs) of the Earth system. The mechanism underlying tipping is the existence of positive feedbacks leading to self-amplification processes that, once triggered, dominate the dynamics of the system. Positive feedbacks can also lead to hysteresis, with implications for reversibility in the context of long-term future climate change. The main mechanism underlying ice-sheet hysteresis is the positive feedback between surface mass balance and elevation. Marine-based ice sheets, such as the western sector of the AIS, are furthermore subject to specific instability mechanisms that can potentially also lead to hysteresis. Simulations with ice-sheet models have robustly confirmed different degrees of hysteresis in the evolution of AIS volume with respect to model parameters and/or climate forcing, suggesting that ice-sheet changes are potentially irreversible on long timescales. Nevertheless, AIS hysteresis is only now becoming a focus of more intensive modeling efforts, in particular efforts including active oceanic forcing. Here, we investigate the hysteresis of the AIS in a three-dimensional hybrid ice-sheet–ice-shelf model with respect to atmospheric forcing, ocean forcing, and both combined. The aim is to obtain a probabilistic assessment of AIS hysteresis and of its critical temperature thresholds by investigating the effect of structural uncertainty, including the representation of ice-sheet dynamics, basal melting and internal feedbacks.
How to cite: Montoya, M., Alvarez-Solas, J., Robinson, A., Blasco, J., Tabone, I., and Moreno, D.: Antarctic ice-sheet hysteresis in a three-dimensional hybrid ice-sheet model, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9765, https://doi.org/10.5194/egusphere-egu2020-9765, 2020.
EGU2020-9929 | Displays | ITS3.1/NP1.2
Conditions for the Compost Bomb InstabilityJoe Clarke, Paul Ritchie, and Peter Cox
Under global warming, soil temperatures are expected to rise. This increases the specific rate of microbial respiration in the soils, which in turn warms the soil, creating a positive feedback. This leads to the possibility of an instability, known as the compost bomb, in which rapidly warming soils release their soil carbon as CO2 to the atmosphere, accelerating global warming. Models of the compost bomb have exhibited interesting dynamical phenomena: excitability, rate-induced tipping and bifurcation-induced tipping. We examine models of increasing sophistication to help understand the conditions that give rise to the compost bomb. We clarify the role an insulating moss layer plays and demonstrate that it has a 'most dangerous' thickness. We also use JULES, a land surface model, to examine where a compost bomb might occur and what effect other processes such as hydrology might have on it.
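The runaway mechanism behind the compost bomb can be sketched with a one-variable toy model (an editor's illustration, not the authors' model or JULES; all parameter values are invented): soil temperature T is heated by respiration growing exponentially with T and cooled linearly towards an atmospheric temperature Ta. When Ta is pushed past a critical value, the cool equilibrium vanishes in a saddle-node bifurcation and T runs away.

```python
import numpy as np

def integrate(Ta_of_t, t_end=50.0, dt=1e-3, T0=0.0, r0=0.1, a=1.0, lam=1.0):
    """Euler-integrate dT/dt = r0*exp(a*T) - lam*(T - Ta(t)); stop on runaway."""
    T = T0
    for t in np.arange(0.0, t_end, dt):
        T += (r0 * np.exp(a * T) - lam * (T - Ta_of_t(t))) * dt
        if T > 10.0:          # respiration heating has run away ("compost bomb")
            return T, True
    return T, False

# With a = lam = 1, the saddle-node of the frozen system sits at
# Ta_c = ln(1/r0) - 1 ~ 1.30: below it a cool equilibrium exists,
# above it the only remaining behaviour is runaway heating.
T_safe, tipped_safe = integrate(lambda t: 1.0)                # Ta held below Ta_c
T_ramp, tipped_ramp = integrate(lambda t: min(0.1 * t, 2.0))  # Ta ramped past Ta_c
```

Holding Ta below the threshold leaves the soil at a bounded temperature; ramping Ta beyond it triggers the instability.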
How to cite: Clarke, J., Ritchie, P., and Cox, P.: Conditions for the Compost Bomb Instability, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9929, https://doi.org/10.5194/egusphere-egu2020-9929, 2020.
EGU2020-15148 | Displays | ITS3.1/NP1.2
Hypothesis testing and uncertainty propagation in paleo climate proxy data evidencing abrupt climate shiftsKeno Riechers, Niklas Boers, Jens Fohlmeister, and Norbert Marwan
Reconstruction of past climate variability relies on inference from paleoclimate proxy data. However, such data often suffer from large uncertainties, in particular concerning the ages assigned to measured proxy values, which makes deriving clear conclusions challenging. Especially in the study of abrupt climatic shifts, dating uncertainties in the proxy archives merit increased attention, since they are frequently of the same order of magnitude as the dynamics of interest. Yet analyses of paleoclimate proxy reconstructions tend to focus on mean values and thereby conceal the full range of uncertainty. In addition, the statistical significance of the reported results is sometimes not tested, or not tested accurately. Here we discuss methods both for rigorous propagation of uncertainties and for hypothesis testing, with applications to the Dansgaard-Oeschger (DO) events of the last glacial interval and their varying timings in different proxy variables and archives. We scrutinize the mathematical analysis of different paleoclimate records evidencing the DO events and provide results that take the full range of uncertainties into account. We discuss several possibilities for testing the significance of apparent leads and lags between transitions found in proxy data evidencing DO events, within and across ice-core archives from Greenland and Antarctica.
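The core point — dating uncertainties of the same order as the apparent leads and lags — can be illustrated with a Monte Carlo toy calculation (an editor's sketch with invented numbers, not the authors' method or data): two proxies record the same transition at nominal ages 50 yr apart, each with a hypothetical 50 yr (1-sigma) dating error, and we propagate the errors into the apparent lead.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical example: proxy A dates a transition to 45.60 kyr BP, proxy B
# to 45.55 kyr BP (B apparently 50 yr later/younger), both with 0.05 kyr
# (50 yr) Gaussian dating uncertainty.
n = 100_000
age_a = 45.60 + 0.05 * rng.standard_normal(n)   # kyr BP (larger = older)
age_b = 45.55 + 0.05 * rng.standard_normal(n)

lag = age_a - age_b            # apparent lead of A over B, per Monte Carlo draw
p_flip = (lag < 0).mean()      # fraction of draws in which the ordering reverses
```

With these numbers the sign of the lead reverses in roughly a quarter of the draws, so the nominal 50 yr lead would not be statistically significant — mean values alone would conceal this.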
How to cite: Riechers, K., Boers, N., Fohlmeister, J., and Marwan, N.: Hypothesis testing and uncertainty propagation in paleo climate proxy data evidencing abrupt climate shifts, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-15148, https://doi.org/10.5194/egusphere-egu2020-15148, 2020.
EGU2020-22226 | Displays | ITS3.1/NP1.2
A simple conceptual model for Dansgaard-Oeschger oscillations derived from MIROC4m AOGCM experimentsTakahito Mitsui, Ayako Abe-Ouchi, Wing-Le Chan, and Sam Sherriff-Tadano
Dansgaard-Oeschger (DO) oscillations are the most pronounced millennial-scale abrupt climate changes of glacial periods. Abe-Ouchi et al. have simulated DO oscillations with MIROC4m, a fully coupled atmosphere-ocean general circulation model (AOGCM). That modelling study suggests that the bipolar seesaw and Southern Ocean dynamics may play an important role in the occurrence of DO oscillations. In this poster, we present a simple conceptual model for DO oscillations based on the mechanism proposed by Abe-Ouchi et al. In this simple model, relaxation oscillations arise via Hopf bifurcations in a particular region of its parameter space, which is qualitatively consistent with the MIROC4m AOGCM experiments. In general, the period of oscillations does not grow drastically near a Hopf bifurcation point in deterministic dynamical systems (Strogatz, "Nonlinear Dynamics and Chaos", 2014; Peltier and Vettoretti, GRL 2014). However, the oscillation periods (return times) do increase near the bifurcation points in the MIROC4m AOGCM, giving a U-shaped dependence of return times on the parameters. We show that, in the simple model, such a U-shaped dependence is obtained by adding noise to the system (which may represent fast "weather" forcing of the slow climate; cf. Mitsui and Crucifix, Clim. Dyn. 2017). We will also discuss tipping-point behaviour found in this simple model.
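The role of noise near a Hopf bifurcation can be sketched with a generic excitable relaxation oscillator (an editor's illustration in the FitzHugh-Nagumo family, not the authors' conceptual model; all parameters invented). Just beyond the bifurcation the deterministic system sits quietly at a fixed point, but fast "weather" noise repeatedly kicks it over the threshold, producing irregular relaxation excursions reminiscent of DO events.

```python
import numpy as np

rng = np.random.default_rng(0)

# Excitable system just past its Hopf bifurcation (fixed point stable for a > 1):
#   eps * dv = (v - v^3/3 - w) dt + noise,   dw = (v + a) dt
eps, a, dt, n = 0.05, 1.05, 1e-3, 100_000

def run(sigma):
    v, w = -a, -a + a**3 / 3.0          # start at the deterministic fixed point
    spikes = 0
    kicks = sigma * np.sqrt(dt) * rng.standard_normal(n)
    for k in kicks:
        v_new = v + (v - v**3 / 3.0 - w) * dt / eps + k
        w += (v + a) * dt
        if v < 0.0 <= v_new:            # upward zero-crossing = one excursion
            spikes += 1
        v = v_new
    return spikes

spikes_det = run(0.0)    # deterministic: rests at the fixed point, no oscillation
spikes_noisy = run(0.2)  # noise repeatedly triggers large relaxation excursions
```

The mean waiting time between noise-triggered excursions, rather than a deterministic period, then controls the return times — the ingredient invoked above to produce the U-shaped dependence.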
How to cite: Mitsui, T., Abe-Ouchi, A., Chan, W.-L., and Sherriff-Tadano, S.: A simple conceptual model for Dansgaard-Oeschger oscillations derived from MIROC4m AOGCM experiments, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22226, https://doi.org/10.5194/egusphere-egu2020-22226, 2020.
EGU2020-12346 | Displays | ITS3.1/NP1.2
Potential state shifts in terrestrial ecosystems related to changes in El Niño-Southern Oscillation dynamicsMateo Duque-Villegas, Juan Fernando Salazar, and Angela Maria Rendón
The El Niño-Southern Oscillation (ENSO) phenomenon is regarded as a policy-relevant tipping element of the Earth's climate system. It has a prominent planetary-scale influence on climatic variability and is susceptible to anthropogenic forcing, which could irreversibly alter its dynamics. Changes in the frequency and/or amplitude of ENSO would have major implications for terrestrial hydrology and ecosystems. The number of extreme events such as droughts and floods could vary regionally, as well as their intensities. Here, we use an intermediate-complexity climate model, the Planet Simulator (PlaSim), to study the potential impact of changing ENSO dynamics on Earth's climate and its terrestrial ecosystems in two experiments. First we investigate the global effects of a permanent El Niño, and then we analyse changes in the amplitude of the fluctuation. We found that the PlaSim model yields a sensible representation of current large-scale climatological patterns, including ENSO-related variability, as well as realistic estimates of the global energy and water budgets. For the permanent El Niño state, there were significant differences in the global distribution of water and energy fluxes that led to asymmetrical effects on vegetation production, which increased in the tropics and decreased in temperate regions. In terrestrial ecosystems of regions such as western North America, the Amazon rainforest, south-eastern Africa and Australia, we found that these El Niño-induced changes could be associated with biome state transitions. For Australia in particular, we found country-wide aridification as a result of sustained El Niño conditions, a potential state in which recent wildfires would be even more dramatic. When the amplitude of the ENSO fluctuation changes, we found that although mean climatological values do not change significantly, extreme values of variables such as temperature and precipitation become more extreme.
Our approach aims at recognizing potential threats to terrestrial ecosystems in climate change scenarios with more frequent El Niño events or changed intensities of the ENSO phases. Although this is not enough to prove that such effects will be observed, we show a consistent picture that should raise awareness about the conservation of global ecosystems.
How to cite: Duque-Villegas, M., Salazar, J. F., and Rendón, A. M.: Potential state shifts in terrestrial ecosystems related to changes in El Niño-Southern Oscillation dynamics, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12346, https://doi.org/10.5194/egusphere-egu2020-12346, 2020.
EGU2020-16536 | Displays | ITS3.1/NP1.2
Mechanisms affecting equilibrium climate sensitivity in the PlaSim Earth System Model with different ocean model configurationsMichela Angeloni, Elisa Palazzi, and Jost von Hardenberg
The equilibrium climate sensitivity (ECS) of a state-of-the-art Earth System Model of intermediate complexity, the Planet Simulator (PlaSim), is determined in three tuned configurations, in which the model is coupled either with a simple Mixed Layer (ML) ocean, at two horizontal resolutions, T21 (600 km) and T42 (300 km), or with the full 3D Large Scale Geostrophic (LSG) ocean model. Sensitivity experiments with doubled and quadrupled CO2 were run, using either dynamic or prescribed sea ice. The resulting ECS using dynamic sea ice is 6.3 K for PlaSim-ML T21, 5.4 K for PlaSim-ML T42 and a much smaller 4.2 K for PlaSim-LSG T21. A systematic comparison between simulations with dynamic and prescribed sea ice identifies a strong contribution of sea ice to the value of the feedback parameter and of the climate sensitivity. Additionally, Antarctic sea ice is underestimated in PlaSim-LSG, leading to a further reduction of ECS when the LSG ocean is used. The ECS of the ML experiments is large compared with current estimates of equilibrium climate sensitivity in CMIP5 models and other EMICs: a relevant observation is that the choice of the ML horizontal diffusion coefficient, and therefore of the parameterized meridional heat transport and in turn the resulting equator-to-pole temperature gradient, plays an important role in controlling the ECS of the PlaSim-ML configurations. This observation should be taken into account when evaluating ECS estimates in models with a mixed-layer ocean. The configuration of PlaSim with the LSG ocean shows very different AMOC regimes, including 250-year oscillations and a complete shutdown of meridional transport, which depend on the ocean vertical diffusion profile and the CO2 forcing conditions.
These features can be explored in the framework of tipping points: the simplified and parameterized form of the climate system components included in PlaSim makes this model a suitable tool to study transitions occurring in the Earth system in the presence of critical points.
How to cite: Angeloni, M., Palazzi, E., and von Hardenberg, J.: Mechanisms affecting equilibrium climate sensitivity in the PlaSim Earth System Model with different ocean model configurations, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-16536, https://doi.org/10.5194/egusphere-egu2020-16536, 2020.
EGU2020-19442 | Displays | ITS3.1/NP1.2
Can increasing greenhouse gases cause tipping of the AMOC?Richard Wood
Climate models consistently project a weakening of the Atlantic Meridional Overturning Circulation (AMOC) in response to increasing greenhouse gases (GHGs) over the 21st century. Models also show the potential for multiple equilibria and tipping points of the AMOC in response to fresh water forcing. However, longer-term model integrations at increased levels of GHGs suggest that AMOC weakening is transient, with the AMOC recovering to its initial strength after GHGs are stabilised. Hence the ‘traditional’ forcing scenarios of increasing GHGs followed by stabilisation do not appear to induce tipping. But with increased interest in ‘overshoot’ scenarios motivated by the Paris climate agreement, is it possible that some climate mitigation pathways do carry a risk of AMOC tipping?
In this study we present a simple AMOC model which captures both the thermal and fresh water forcing associated with GHG increase, and is able to reproduce previous GCM results for both GHG and idealised fresh water (‘hosing’) scenarios. We identify the conditions under which AMOC tipping could occur, and their significance for ‘safe’ climate mitigation pathways.
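The bistability and hysteresis that simple AMOC box models exhibit under freshwater forcing can be sketched numerically (an editor's Stommel-type toy model, not the model presented in this abstract; the nondimensional form and parameters are invented). Quasi-statically sweeping the freshwater forcing up and then back down traces the two stable branches between the folds.

```python
import numpy as np

def step(y, mu, dt):
    # Stommel-type box model: y = nondimensional salinity contrast,
    # overturning strength q = 1 - y; mu = freshwater forcing.
    # Equilibria satisfy mu = y*|1 - y|, with folds at mu = 0 and mu = 0.25.
    return y + (mu - y * abs(1.0 - y)) * dt

def sweep(mus, y0, dt=0.01, n_relax=3000):
    """Quasi-statically sweep the forcing, letting y relax at each value."""
    y, states = y0, []
    for mu in mus:
        for _ in range(n_relax):
            y = step(y, mu, dt)
        states.append(y)
    return np.array(states)

mus_up = np.linspace(0.0, 0.5, 51)
y_up = sweep(mus_up, y0=0.0)                     # ramp freshwater forcing up ...
y_down = sweep(mus_up[::-1], y_up[-1])[::-1]     # ... then back, re-aligned to mus_up

i = 20  # mu = 0.2, inside the bistable window: strong overturning on the
        # up-sweep (small y), collapsed/reversed state on the down-sweep (y > 1)
```

At the same forcing value the two sweeps sit on different branches — the hysteresis that makes overshoot pathways potentially risky.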
How to cite: Wood, R.: Can increasing greenhouse gases cause tipping of the AMOC?, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19442, https://doi.org/10.5194/egusphere-egu2020-19442, 2020.
EGU2020-18424 | Displays | ITS3.1/NP1.2
The tipping points of Pine Island Glacier, West AntarcticaSebastian Rosier, Ronja Reese, Jonathan Donges, Jan De Rydt, Hilmar Gudmundsson, and Ricarda Winkelmann
Mass loss from the Antarctic Ice Sheet is the main source of uncertainty in projections of future sea-level rise, with important implications for coastal regions worldwide. Central to this is the marine ice sheet instability: once a critical threshold, or tipping point, is crossed, ice-internal dynamics can drive a self-amplifying retreat committing a glacier to substantial ice loss that is irreversible at time scales most relevant to human societies. This process might have already been triggered in the Amundsen Sea region, where Pine Island and Thwaites glaciers dominate the current mass loss from Antarctica. However, current modelling and observational techniques have not been able to establish this rigorously, leading to divergent views on the future mass loss of the West Antarctic Ice Sheet. Here we aim at closing this knowledge gap by conducting a systematic investigation of the tipping points of Pine Island Glacier using established early warning indicators that detect critical slowing down as a system approaches a tipping point. We are thereby able to identify three distinct tipping points in response to increases in ocean-induced melt. The third and final event, triggered for less than a tripling of melt rates, leads to a retreat of the entire glacier that could initiate a collapse of the West Antarctic Ice Sheet.
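Critical slowing down, the basis of the early warning indicators mentioned above, can be demonstrated on synthetic data (an editor's generic sketch, not the glacier-model diagnostics used in this study): as the restoring force of a noisy system weakens on approach to a tipping point, both the variance and the lag-1 autocorrelation of its fluctuations rise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic record: AR(1) fluctuations whose persistence creeps toward 1,
# mimicking a weakening restoring force near a tipping point.
n = 4000
phi = np.linspace(0.2, 0.95, n)     # lag-1 persistence, drifting toward 1
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi[t] * x[t - 1] + rng.standard_normal()

def lag1_autocorr(w):
    w = w - w.mean()
    return np.dot(w[:-1], w[1:]) / np.dot(w, w)

early, late = x[:1000], x[-1000:]
ac_early, ac_late = lag1_autocorr(early), lag1_autocorr(late)
var_early, var_late = early.var(), late.var()
```

Both indicators computed in a late window exceed their early-window values, which is the signature such detection methods look for.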
How to cite: Rosier, S., Reese, R., Donges, J., De Rydt, J., Gudmundsson, H., and Winkelmann, R.: The tipping points of Pine Island Glacier, West Antarctica, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18424, https://doi.org/10.5194/egusphere-egu2020-18424, 2020.
EGU2020-9798 | Displays | ITS3.1/NP1.2
The role of extreme noise in tipping between stable statesPeter Ditlevsen
Paleoclimatic records show that under glacial boundary conditions the climate jumped irregularly between two different climate states: the stadial and interstadial climates, characterized by extremely abrupt climate change, the Dansgaard-Oeschger events. The irregularity, and the fact that no known external trigger is present, indicate that these transitions are induced by internal noise, so-called noise-induced tipping (n-tipping). The high-resolution record of dust from Greenland ice cores, a proxy for the state of the atmosphere, can be well fitted by a non-linear 1D stochastic process. To do so, however, the noise needs to be an alpha-stable process, characterized by heavy tails that fall outside the scope of the classical central limit theorem. I will discuss how extreme events can influence the transition from one climate state to the other.
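The difference between Gaussian and alpha-stable noise for tipping can be sketched with a standard double-well model (an editor's illustration, not the fitted process of this abstract; all parameters invented). At a noise amplitude far too small for Gaussian fluctuations to cross the barrier, the heavy tails of a 1.5-stable process still produce occasional large jumps that flip the state.

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(1)

# Double-well potential V(x) = x^4/4 - x^2/2, driven by noise:
#   dx = (x - x^3) dt + sigma * dL
# Increments of an alpha-stable process scale as dt**(1/alpha); alpha = 2
# recovers Brownian (Gaussian) increments.
dt, sigma, n = 0.01, 0.2, 100_000

def count_transitions(jumps):
    x, well, hops = -1.0, -1, 0
    for dL in jumps:
        x = np.clip(x + (x - x**3) * dt + dL, -5.0, 5.0)  # clip keeps Euler stable
        if well * x < -0.5:     # reached the shoulder of the opposite well
            well, hops = -well, hops + 1
    return hops

gauss = sigma * dt**0.5 * rng.standard_normal(n)
levy = sigma * dt**(1 / 1.5) * levy_stable.rvs(1.5, 0.0, size=n, random_state=rng)

hops_gauss = count_transitions(gauss)  # small Gaussian noise: effectively trapped
hops_levy = count_transitions(levy)    # heavy tails: rare big jumps cross the barrier
```

In the heavy-tailed case transitions are triggered by single extreme events rather than gradual diffusion over the barrier — the mechanism discussed here.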
How to cite: Ditlevsen, P.: The role of extreme noise in tipping between stable states, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9798, https://doi.org/10.5194/egusphere-egu2020-9798, 2020.
EGU2020-4103 | Displays | ITS3.1/NP1.2
Reconstruction of Complex Dynamical Systems Using Stochastic Differential EquationsForough Hassanibesheli, Niklas Boers, and Jürgen Kurths
A complex system is a system composed of highly interconnected components whose collective properties cannot be described by the dynamical behavior of the individual parts. Typically, complex systems are governed by nonlinear interactions and intricate fluctuations; to retrieve the dynamics of a system, it is therefore necessary to characterize and assess the interactions between deterministic tendencies and random fluctuations.
For systems with large numbers of degrees of freedom interacting across various time scales, deriving time-evolution equations from data is computationally expensive. A possible way to circumvent this problem is to isolate a small number of relatively slow degrees of freedom that may suffice to characterize the underlying dynamics, and to solve the governing equation of motion for the reduced-dimension system in the framework of stochastic differential equations (SDEs). For some specific example settings, we have studied the performance of three stochastic dimension-reduction methods (the Langevin equation (LE), the generalized Langevin equation (GLE) and Empirical Model Reduction (EMR)) in modelling various synthetic and real-world time series. The numerical simulations of all models are examined using the probability distribution function (PDF) and the autocorrelation function (ACF) of the average simulated time series as statistical benchmarks for assessing the different models' performance.
First, we reconstruct the Niño-3 monthly sea surface temperature (SST) index, averaged over (5°N–5°S, 150°–90°W), from 1891 to 2015 using the three aforementioned stochastic models. We demonstrate that all of these models can reproduce the skewed and heavy-tailed distribution of the Niño-3 SST; comparing ACFs, the GLE tends to achieve higher accuracy than the LE and EMR. A particular challenge for deriving the underlying dynamics of complex systems from data arises in situations with abrupt transitions between alternative states. We show how the Kramers-Moyal approach to deriving drift and diffusion terms for LEs can help in such situations. A prominent example of such 'tipping events' is given by the Dansgaard-Oeschger events during previous glacial intervals. We obtain the statistical properties of high-resolution (20-yr average) δ18O and Ca2+ records from the same NGRIP ice core on the GICC05 time scale. Through extensive analyses of various systems, our results indicate that stochastic differential equation models that account for memory effects are comparatively better approaches for understanding complex systems.
How to cite: Hassanibesheli, F., Boers, N., and Kurths, J.: Reconstruction of Complex Dynamical Systems Using Stochastic Differential Equations, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4103, https://doi.org/10.5194/egusphere-egu2020-4103, 2020.
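The Kramers-Moyal approach mentioned above — estimating a Langevin equation's drift and diffusion terms from conditional moments of the increments — can be sketched as follows. The synthetic Ornstein-Uhlenbeck test series and bin settings are illustrative assumptions, not the authors' setup:

```python
import numpy as np

# Estimate the first two Kramers-Moyal coefficients, D1 (drift) and D2
# (diffusion), by binning state space -- demonstrated on a synthetic
# Ornstein-Uhlenbeck process whose true drift is -theta * x.
rng = np.random.default_rng(0)
theta, sigma, dt, n = 1.0, 0.5, 0.01, 500_000

x = np.empty(n)
x[0] = 0.0
noise = rng.normal(0.0, np.sqrt(dt), n)
for i in range(1, n):
    x[i] = x[i - 1] - theta * x[i - 1] * dt + sigma * noise[i]

dx = np.diff(x)
bins = np.linspace(-1.0, 1.0, 21)
idx = np.digitize(x[:-1], bins)

centers, drift_est, diff_est = [], [], []
for b in range(1, len(bins)):
    mask = idx == b
    if mask.sum() < 100:          # skip sparsely populated bins
        continue
    centers.append(0.5 * (bins[b - 1] + bins[b]))
    drift_est.append(dx[mask].mean() / dt)              # D1(x) ≈ <dx | x> / dt
    diff_est.append((dx[mask] ** 2).mean() / (2 * dt))  # D2(x) ≈ <dx² | x> / 2dt

centers = np.array(centers)
drift_est = np.array(drift_est)
```

For the OU process the estimated drift should recover a line of slope close to -theta, and the diffusion should be roughly constant at sigma²/2; the same binned estimators can then be applied to proxy records such as the δ18O series.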
EGU2020-3461 | Displays | ITS3.1/NP1.2
“Climate shift of the Atlantic Meridional Overturning Circulation (AMOC) in Reanalyses (ORAS5): possible causes, and sources of uncertainty”Vincenzo de Toma, Chunxue Yang, and Vincenzo Artale
We present preliminary results and insights from the analysis of the ensemble of the Ocean Reanalysis System 5 (ORAS5), produced by the European Centre for Medium-Range Weather Forecasts (ECMWF), which reconstructs the ocean's past history from 1979 to 2018 at monthly mean resolution, with a spatial resolution of 0.25° and 75 vertical levels.
We focused on the AMOC, which can be considered one of the main drivers of the Earth's climate system, and observed that its strength at 26.5°N presents a shift in the mean of about 5 Sverdrup over the period 1995-2000, which can be considered a climate tipping point.
We aim to investigate the causes of this reduction and propose three mechanisms responsible for the observed AMOC volume transport reduction: the Gulf Stream separation path, changes in the Mediterranean Outflow Water (MOW), and the North Atlantic Deep Water (NADW) formation processes in the Labrador Sea.
The Gulf Stream separation path is investigated by visualizing the barotropic stream function averaged over two periods, before and after 1995-2000. In particular, a shift in the direction of the barotropic currents can be detected, which is further confirmed by seasonal climatology analysis. In the first period (greater volume transport), the patterns are more intense and the Gulf Stream reaches higher latitudes, allowing for more vigorous deep water formation in the Labrador Sea than in the second period.
Moreover, we observe that the AMOC volume transport reduction at 26.5°N is accompanied by a reduction in the heat fluxes over the Labrador Sea. We think this reduction of heat fluxes has a cascading effect on the horizontally averaged temperature, salinity, and potential density profiles, which are manifestations of reduced deep water production in the Labrador Sea and can ultimately drive the AMOC weakening.
Finally, the Mediterranean Sea has experienced a general warming trend over the last decades, in particular of deep water temperatures since the mid-1980s. It is well known that this warming induces large variability in the hydrological characteristics of the MOW, which thus becomes a likely key factor driving the AMOC variability observed in ORAS5. In fact, there is a larger ensemble spread in both the temperature and salinity climatological profiles at 40°N, i.e. in the region of the Gibraltar Strait and Gulf of Cadiz.
This analysis highlights the high sensitivity of the MOW to perturbations producing the different ensemble members of ORAS5.
Our hypothesis is that the nonlinear interaction between these three mechanisms could have a complex feedback on the AMOC variability.
In conclusion, our preliminary results highlight the relevance of the deep water formation process in the Labrador Sea, the MOW, and the Gulf Stream path as the main sources of AMOC variability and stability. Besides, our analysis points out the need for further studies, e.g. increasing resolution at the straits (such as the Strait of Gibraltar), investigating correlations with the variability of the subpolar gyre, and developing conceptual studies using intermediate complexity models interpreted through the lens of dynamical systems theory and statistical mechanics.
How to cite: de Toma, V., Yang, C., and Artale, V.: “Climate shift of the Atlantic Meridional Overturning Circulation (AMOC) in Reanalyses (ORAS5): possible causes, and sources of uncertainty”, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3461, https://doi.org/10.5194/egusphere-egu2020-3461, 2020.
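A mean shift such as the reported ~5 Sv change around 1995-2000 can be located in a transport time series with a simple two-mean least-squares change-point fit. The sketch below runs on a synthetic monthly series built for illustration, not on ORAS5 output:

```python
import numpy as np

# Synthetic monthly AMOC transport (Sv): a 5 Sv drop in the mean around 1997,
# plus noise. Values are illustrative, chosen to mimic the reported shift.
rng = np.random.default_rng(1)
months = np.arange(1979, 2019, 1 / 12)
amoc = np.where(months < 1997.5, 18.0, 13.0) + rng.normal(0.0, 1.0, months.size)

def best_split(y):
    """Index minimizing the residual sum of squares of a two-mean model."""
    best, best_sse = None, np.inf
    for k in range(12, y.size - 12):     # require at least a year on each side
        sse = ((y[:k] - y[:k].mean()) ** 2).sum() \
            + ((y[k:] - y[k:].mean()) ** 2).sum()
        if sse < best_sse:
            best, best_sse = k, sse
    return best

k = best_split(amoc)
shift = amoc[:k].mean() - amoc[k:].mean()
print(months[k], shift)   # estimated split year and shift magnitude (Sv)
```

On real reanalysis data one would additionally account for trends and serial correlation, but the same least-squares scan is the core of most mean-shift detectors.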
EGU2020-6422 | Displays | ITS3.1/NP1.2
Testing early-warning signals for the transition of ecological network properties in wetland complexBin Kim, Hyojeong Lee, Khawon Lee, and Jeryang Park
Wetlands, which exist in both natural and man-made landscapes, play a critical role in providing various ecosystem services for both ecology and human well-being. These services are affected not only by regional hydro-climatic and geologic conditions but also by human activities. On a landscape scale, wetlands form a complex spatial structure through their distribution in a specific geological setting. Consequently, dispersal of inhabiting species between spatially distributed wetlands organizes ecological networks consisting of nodes (wetlands) and links (pathways of movement). In this study, we generated and analyzed such ecological networks by introducing deterministic (e.g., threshold distance) or stochastic (e.g., exponential kernel and heavy-tailed model) dispersal models. From these networks, we evaluated structural and functional characteristics including degree, efficiency, and clustering coefficient, all of which are affected by disturbances such as seasonal hydro-climatic conditions that change wetland surface area, and shocks that may remove nodes from the network (e.g., human activities for land development). Specifically, using the characteristics of the corresponding ecological networks, we analyzed (1) network robustness by simulating the removal of nodes selected by degree or area, and (2) the change of variance as an early-warning signal to predict where a critical point may occur in global network characteristics affected by disturbances. The results showed no clear relationship between network robustness and wetland size under node removal. However, when nodes were removed in order of degree, the network fragmented rapidly. Also, we observed that the variance of network characteristics in the time series increased in drier hydro-climatic conditions for all three network models we tested.
This result indicates the possibility of using increasing variance as an early-warning signal for detecting a critical transition in network characteristics as the hydro-climatic condition becomes drier. In sum, the observed characteristics of ecological networks are vulnerable to targeted attacks on hubs (structurally important nodes) and to drought. Also, the resilience of a wetlandscape can be low after hubs are destroyed, or in a dry season causing the fragmentation of habitats. Implications of these results for modeling ecological networks that depend on hydrologic systems and are influenced by human activities can inform new decision-making processes, especially for restoration and conservation purposes.
How to cite: Kim, B., Lee, H., Lee, K., and Park, J.: Testing early-warning signals for the transition of ecological network properties in wetland complex, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6422, https://doi.org/10.5194/egusphere-egu2020-6422, 2020.
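The degree-targeted robustness test described above can be sketched with a threshold-distance (deterministic dispersal) network: wetlands placed at random are linked when closer than a dispersal radius, then removed hub-first while the giant component is tracked. All parameter values here are illustrative assumptions:

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(7)
n, radius = 200, 0.12               # wetland count and threshold dispersal distance
pos = rng.random((n, 2))            # random wetland locations in the unit square

# Threshold-distance dispersal: link wetlands closer than `radius`.
adj = {i: set() for i in range(n)}
for i in range(n):
    for j in range(i + 1, n):
        if np.linalg.norm(pos[i] - pos[j]) < radius:
            adj[i].add(j)
            adj[j].add(i)

def giant_fraction(adj):
    """Largest connected component as a fraction of the original n nodes (BFS)."""
    seen, best = set(), 0
    for s in adj:
        if s in seen:
            continue
        comp, q = 0, deque([s])
        seen.add(s)
        while q:
            u = q.popleft()
            comp += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    q.append(v)
        best = max(best, comp)
    return best / n

def remove_node(adj, u):
    for v in adj.pop(u):
        adj[v].discard(u)

# Hub-first (degree-targeted) removal, recording fragmentation as we go.
frac = [giant_fraction(adj)]
for _ in range(n // 2):
    hub = max(adj, key=lambda u: len(adj[u]))
    remove_node(adj, hub)
    frac.append(giant_fraction(adj))
```

Comparing `frac` against a curve from random or area-ordered removal reproduces the qualitative finding above: degree-targeted removal fragments the network much faster.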
EGU2020-10861 | Displays | ITS3.1/NP1.2
Recurrence Plot based entropies and their ability to detect transitionsK. Hauke Kraemer, Norbert Marwan, Karoline Wiesner, and Jürgen Kurths
Many dynamical processes in the Earth sciences are the product of many interacting components and often have limited predictability, not least because they can exhibit regime transitions (e.g. tipping points). To quantify complexity, entropy measures such as the Shannon entropy of the value distribution are widely used. Amongst other, more sophisticated ideas, a number of entropy measures based on recurrence plots have been suggested. Because different structures of the recurrence plot, e.g. diagonal lines, are used for the estimation of probabilities, these entropy measures represent different aspects of the analyzed system and thus behave differently. In the past, this fact has led to difficulties in interpreting and understanding those measures. We review the definitions, motivation, and interpretation of these entropy measures, compare their differences, and discuss some of the pitfalls of using them.
Finally, we illustrate their potential in an application to paleoclimate time series. Using the presented entropy measures, changes and transitions in past climate dynamics can be identified and interpreted.
How to cite: Kraemer, K. H., Marwan, N., Wiesner, K., and Kurths, J.: Recurrence Plot based entropies and their ability to detect transitions, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10861, https://doi.org/10.5194/egusphere-egu2020-10861, 2020.
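One such recurrence plot based entropy, the Shannon entropy of the diagonal line length distribution (the classical RQA "ENTR" measure), can be sketched as follows; the recurrence threshold, minimal line length, and toy signals are illustrative choices:

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence matrix: R[i, j] = 1 when states i and j are closer than eps."""
    d = np.abs(x[:, None] - x[None, :])
    return (d < eps).astype(int)

def diagonal_entropy(R, lmin=2):
    """Shannon entropy of the distribution of diagonal line lengths >= lmin."""
    n = R.shape[0]
    lengths = []
    for k in range(1, n):                 # off-diagonals of the upper triangle
        run = 0
        for v in np.diagonal(R, offset=k):
            if v:
                run += 1
            else:
                if run >= lmin:
                    lengths.append(run)
                run = 0
        if run >= lmin:
            lengths.append(run)
    if not lengths:
        return 0.0
    _, counts = np.unique(lengths, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

t = np.linspace(0, 20 * np.pi, 600)
periodic = np.sin(t)
noisy = np.random.default_rng(3).normal(size=600)

e_per = diagonal_entropy(recurrence_matrix(periodic, 0.1))
e_noise = diagonal_entropy(recurrence_matrix(noisy, 0.1))
```

As the abstract cautions, such measures depend on which recurrence structures feed the probability estimate (here diagonal lines, truncated at the plot border), so values from different definitions are not directly comparable.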
EGU2020-22229 | Displays | ITS3.1/NP1.2
The Baltic proper eutrophication - a runaway system that passed the tipping point at the end of the 1950sAnders Stigebrandt
The magnitude of the annual biological production in the Baltic proper is determined by the phosphorus (P) concentration C in the surface layer in winter. C is proportional to the total P supply TPS to the water column. TPS has three components: the land-based supply LPS, the ocean supply OPS, and the internal supply IPS from anoxic bottoms. The OPS is minor. The land-based source LPS culminated in the 1980s and at present has about the same value as in the early 1950s. Despite this, C still increases, and present-day C is at least 3 times higher than C in the 1950s. This runaway evolution of the Baltic proper P content demonstrates that the evolution of C cannot be explained by the external sources LPS and OPS alone. The runaway behaviour suggests a positive feedback between the state C and the supply TPS. It is shown that the internal P supply IPS provides such a positive feedback via its dependence on the area of anoxic bottoms Aanox: IPS is proportional to Aanox, and Aanox is proportional to C, so that IPS is proportional to C. The internal supply IPS thus increases with C whenever anoxic bottoms are present. Anoxic bottoms start to occur when C passes the threshold value Ct, which happens when TPS passes the threshold value TPSt. This occurred in the Baltic proper at the end of the 1950s. A time-dependent P model describes the evolution of C in the Baltic proper from 1950 to the present quite well.
How to cite: Stigebrandt, A.: The Baltic proper eutrophication - a runaway system that passed the tipping point at the end of the 1950s, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22229, https://doi.org/10.5194/egusphere-egu2020-22229, 2020.
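The positive feedback described above — an internal supply IPS proportional to C once C exceeds the threshold Ct — can be sketched as a toy time-dependent P model. All rate constants below are illustrative and not fitted to the Baltic proper:

```python
import numpy as np

# Toy budget: dC/dt = LPS + IPS(C) - r*C, where the internal supply
# IPS = k_int * C switches on only above the anoxia threshold Ct.
Ct = 1.0        # threshold concentration at which anoxic bottoms appear
k_int = 0.6     # strength of the internal (anoxic-bottom) feedback
r = 1.0         # first-order loss rate
dt, years = 0.01, 70

def simulate(lps):
    c, out = 0.5, []
    for _ in range(int(years / dt)):
        ips = k_int * c if c > Ct else 0.0
        c += (lps + ips - r * c) * dt
        out.append(c)
    return np.array(out)

low = simulate(0.8)    # supply below threshold: C settles at lps/r < Ct
high = simulate(1.2)   # supply just above threshold: feedback engages
```

With these illustrative numbers, a supply only 50% larger than the sub-threshold case settles at an equilibrium several times higher (lps/(r - k_int) rather than lps/r), mirroring the runaway behaviour described in the abstract.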
EGU2020-3255 | Displays | ITS3.1/NP1.2
Designing interventions in the ice sheet/sea level systemJohn Moore, Mike Wolovick, Bowie Keefer, and Oliver Levers
The marine ice sheet instability may already have been initiated in several glaciers in West Antarctica. Hence controlling global temperatures is unlikely to be an effective way of preventing considerable sea level rise, which limits the utility of both greenhouse gas mitigation and solar radiation geoengineering as control mechanisms. Instead, we evaluate other options such as allowing ice shelves to thicken by reducing basal melting, or slowing ice streams by drying their beds. We consider the engineering limitations, costs, and practical consequences of various designs, and how a ladder of implementation might be climbed, learning from Greenland and small-scale field trials. The governance, ethics, legality, and societal implications for local indigenous communities and the global South are also discussed.
How to cite: Moore, J., Wolovick, M., Keefer, B., and Levers, O.: Designing interventions in the ice sheet/sea level system, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3255, https://doi.org/10.5194/egusphere-egu2020-3255, 2020.
ITS3.2/NH10.7 – Climate Extremes, Tipping Dynamics, and Earth Resilience in the Anthropocene
EGU2020-20192 | Displays | ITS3.2/NH10.7 | Highlight
A global attribution study on historical heat-related mortality impacts attributed to climate change.Ana Maria Vicedo Cabrera, Francesco Sera, Rochelle Schneider dos Santos, Aurelio Tobias, Christopher Astrom, Yuming Guo, Yasushi Honda, Anna Delucca, David Hondula, Dolores Ibarreta, Veronika Huber, and Antonio Gasparrini
On behalf of the Multi-Country Multi-City Collaborative (MCC) Research Network.
Background & Aim: Climate change is considered the most important environmental threat to human health. A substantial mortality and morbidity burden has been directly or indirectly attributed to climate-sensitive environmental stressors. However, limited quantitative evidence exists on how much of this burden can be attributed to man-made influences on climate. In this large health attribution study, we aimed to quantify the proportion of excess heat-related mortality attributable to anthropogenic climate change in recent decades across 626 locations in 41 countries in various regions of the world included in the MCC database.
Methods: We first estimated the location-specific heat-mortality associations through two-stage time-series analyses with quasi-Poisson regression with distributed lag non-linear models and multivariate multilevel meta-regression using observed data. We then quantified the heat-related excess mortality in each location using daily modelled series derived from historical (factual) and preindustrial control (counterfactual) simulations from 5 general circulation models (ISIMIP2b database) in the period between 1991 and 2019. We finally computed the proportion of heat-related excess mortality attributable to anthropogenic influences as the difference between the two scenarios, with associated measures of uncertainty.
Results: We found a steep increase in the level of warming, expressed as the difference in annual average temperature between scenarios, with an average increase of 1.0°C (from 0.7°C to 1.2°C) across the 626 locations between 1991 and 2019. Overall excess heat-mortality fractions of 1.92% [95% confidence interval: 0.41, 3.25] and 1.28% [0.20, 2.50] were estimated under the factual and counterfactual scenarios, respectively, with an overall difference of 0.76% [0.25, 1.74]. This translates to 33% of historical heat-related excess mortality that can be attributed to anthropogenic climate change. Larger proportions were found in North America (46%), Central America (47%), South America (43%), South Africa (48%), Middle-East Asia (61%), South-East Asia (50%), and Australia (42%), although highly imprecise in most cases.
Conclusions: Our findings suggest that current warming driven by anthropogenic influences is already responsible for a considerable proportion of the heat-related mortality burden. These results stress the importance of strengthening current mitigation strategies to reduce further warming of the planet and related health impacts.
How to cite: Vicedo Cabrera, A. M., Sera, F., Schneider dos Santos, R., Tobias, A., Astrom, C., Guo, Y., Honda, Y., Delucca, A., Hondula, D., Ibarreta, D., Huber, V., and Gasparrini, A.: A global attribution study on historical heat-related mortality impacts attributed to climate change., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20192, https://doi.org/10.5194/egusphere-egu2020-20192, 2020.
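As a back-of-the-envelope check on the headline figure, the attributable proportion follows from the two quoted excess mortality fractions: it is the share of the factual heat burden that vanishes under the counterfactual (pre-industrial) climate. Using the abstract's pooled point estimates:

```python
# Excess heat-mortality fractions quoted in the abstract (point estimates).
af_factual = 1.92 / 100         # factual (historical) climate
af_counterfactual = 1.28 / 100  # pre-industrial control climate

# Share of the factual heat burden attributable to anthropogenic warming.
attributable = (af_factual - af_counterfactual) / af_factual
print(round(100 * attributable))   # → 33
```

Note the abstract also reports a pooled scenario difference of 0.76%, slightly larger than the simple difference of the point estimates (0.64%), presumably because the pooling across locations was done before differencing; the 33% figure matches the simple ratio above.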
EGU2020-14050 | Displays | ITS3.2/NH10.7
Climate extremes and ecosystem resilience in a future worldMichael Bahn
The ability of ecosystems to resist and recover from climate extremes is of fundamental societal importance, given the critical role of ecosystems in supplying ecosystem services such as food and fiber production, or water and climate regulation. To date there is a lack of understanding of how the projected increases in the frequency and intensity of climate extremes will affect ecosystems in a future world. Will the legacy of past extreme climatic events alter ecosystem responses to subsequent extreme events? What are the thresholds of severity altering ecosystem recovery from extreme events or causing irreversible shifts in ecosystem functioning? How do ecosystems respond to climate extremes in the context of multiple co-occurring environmental changes, including climate warming, elevated atmospheric CO2 concentrations, and other interacting climate extremes (i.e. 'compound events')? In what ways do biodiversity and the composition of species and their traits affect ecosystem resilience? How do land management and land-use changes alter ecosystem responses to climate extremes? In this talk I will show some recent insights on these questions and will illustrate how observations can be placed in a framework permitting a comparable quantification of resilience across different ecosystems, ecosystem functions and services. Finally, I will discuss implications for enhancing the adaptive capacity of social-ecological systems to absorb climate extremes.
How to cite: Bahn, M.: Climate extremes and ecosystem resilience in a future world, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-14050, https://doi.org/10.5194/egusphere-egu2020-14050, 2020.
EGU2020-6681 | Displays | ITS3.2/NH10.7
Great divergence in climate change adaptation during 1500-1900AD: A comparative study on social resilience of Germany and China from a food security perspectiveDiyang Zhang, Xiuqi Fang, and Yanjun Wen
The effectiveness of adaptation to climate change depends on social resilience. Historical case studies of climate change adaptation are conducive to better understanding the solutions preferred by people with different cultural backgrounds, and to coping with the risks of ongoing global climate change. The relationships among climate change, adaptation and social resilience are analyzed based on previous research on famines, agricultural production, trade and migration in Germany from the 16th to the early 20th century. Differences in the primary choices and their effectiveness between Germany and China are also discussed from a food security perspective. The results are as follows. (1) In the 16th and 17th centuries, the German agricultural system was quite sensitive to the cold and abruptly fluctuating climate, and poor harvests were always accompanied by famines, of which more than 30% were severe. After 1700AD, the severity of famine and its correlation with temperature declined gradually. About 29% of famines were merely considered dearths, and the only severe famine (1770-1772AD) occurred after back-to-back harvest failures. However, the impact of rainfall extremes on harvests still existed. (2) Germany successfully escaped from famine after 1850AD due to four effective adaptations: ① Adjustment of the planting structure, such as increasing the proportion of rye, was tried first, but its effectiveness was limited until potatoes became widely accepted. ② The rapid increase in crop yields brought by agro-technological progress reversed the trend of social resilience decreasing with population growth, but was not enough to fully offset the impact of climatic deterioration. ③ The degree of dependence on grain imports reached 20% within a short time, which improved food availability and reduced the famine risk in the German mainland.
④ Three emigration waves, following the drought (1844-1846AD) and cooling (1870-1890AD), might have partly alleviated food shortages, especially at a local scale. By 1900AD, German social resilience was nearly 20 times that of a scenario lacking adaptation. (3) In contrast to Germany, which entered a period of increasing resilience in the early 18th century, China's resilience continued to decline as population pressure increased. The differences might be attributed to location and cultural background. China had long been a unified and powerful empire in East Asia, with a large internal market and a self-sufficient agricultural society, which made it more prone to reduce risk through domestic adjustments such as internal migration and government relief. When the government's capacity for disaster relief failed to meet the needs of crisis management, social resilience dropped dramatically. Germany, by contrast, located on a continent with a long history of division and amalgamation, had a commercial tradition and was close to the origin of the first industrial revolution, and was therefore more willing and able to find new approaches for ensuring food supply or transferring risk through regional exchanges.
How to cite: Zhang, D., Fang, X., and Wen, Y.: Great divergence in climate change adaptation during 1500-1900AD: A comparative study on social resilience of Germany and China from a food security perspective, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6681, https://doi.org/10.5194/egusphere-egu2020-6681, 2020.
EGU2020-3126 | Displays | ITS3.2/NH10.7 | Highlight
Climate attribution of environmental catastrophesTheodore Shepherd
The role of climate change in environmental or ecological catastrophes is generally a complex question to address, because of the importance of non-climatic factors. From a climate perspective, the latter are confounding factors, whereas from an ecological perspective, they are often the heart of the matter. How these factors are treated affects the nature of the scientific questions that can be answered. In particular, the coarse-graining required to address probabilistic questions inevitably blurs the details of any particular event, whereas these details can be retained when addressing singular questions. In this paper, based on an analysis done jointly with Lisa Lloyd, I will present several published case studies of environmental catastrophes associated with extreme weather or climate events. Whilst both the singular ‘storyline’ and probabilistic ‘risk-based’ approaches to extreme-event attribution have uses in the descriptions of such events, we find the storyline approach to be more readily aligned with the forensic approach to evidence that is prevalent in the ecological literature. Implications for the study of environmental catastrophes are discussed.
How to cite: Shepherd, T.: Climate attribution of environmental catastrophes, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3126, https://doi.org/10.5194/egusphere-egu2020-3126, 2020.
EGU2020-21467 | Displays | ITS3.2/NH10.7
Resilience for whom? Governing social-ecological transformation in Cambodia’s Tonle Sap LakeAmy Fallon and Marko Keskinen
Growing water scarcity around the world is a crucial issue driven by global environmental change, as well as increasing competition for water resources for different economic and social pursuits. Climate change will have far-reaching consequences for water resources, particularly through increasing frequency and intensity of extreme weather events, such as droughts and floods. Such changes will acutely impact water and food security in developing countries, where large proportions of society depend on natural resources for their livelihoods. This can significantly undermine the resilience of such complex social-ecological systems, and the fulfilment of SDGs, including water-related SDG 6.
The capacity of freshwater systems to cope with stresses and shocks can be weakened when irreversible changes occur and thresholds are exceeded. It is therefore important for water governance arrangements to incorporate characteristics such as non-linear dynamics and unpredictability. Resilience is also gaining traction as a holistic framework to examine social-ecological system components, processes and feedback loops under change across scales. However, resilience has been critiqued for its inability to appropriately reflect socio-political dynamics, including power asymmetries, cultural values, and human well-being.
In this presentation, a novel theoretical framework for studying and describing resilience is presented for the analysis of freshwater system governance, using three dimensions of resilience across multiple scales of society: absorptive, adaptive, and transformative capacity. The audience is encouraged to engage critically with the concept, asking the question “resilience of what, to what, and for whom?”. In doing so, we will also address the typically narrow technical focus on resilience, and its potential challenges in achieving societal resilience to climate extremes.
The framework is applied to Cambodia’s Tonle Sap and its hydrologically and culturally unique flood pulse system. The lake provides food security for millions, yet is undergoing negative ecological and social transformation due to pressures along the Mekong River including climate change, hydropower development, and weak governance. The changing dynamics in its flood pulse system and an increasingly complex socio-political landscape are presented through the framework, addressing both positive and negative components of resilience. In this way, the framework helps to put the current research and actions on the lake’s management into the broader context of resilience and change.
We will demonstrate absorptive and adaptive responses of people living on and around the lake, including urban migration and illegal fishing practices. The risk of so-called rigidity traps (inflexible system components) is also explored, including an increasingly resilient autocratic government regime – and the potential of such rigidity traps to undermine the resilience of the overall system. An enduring status quo of narratives around agriculture and hydropower development is shown as a key aspect of resilience of the Tonle Sap. Finally, we will present the key windows of opportunity for transformation, focusing on the role of local, largely informal institutions in facilitating sustainable and equitable governance outcomes.
The key aims of this presentation are to present a novel framing of resilience that incorporates societal dimensions more fully, and to identify pathways for transformative change that benefit all relevant groups of society.
How to cite: Fallon, A. and Keskinen, M.: Resilience for whom? Governing social-ecological transformation in Cambodia’s Tonle Sap Lake, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21467, https://doi.org/10.5194/egusphere-egu2020-21467, 2020.
EGU2020-18648 | Displays | ITS3.2/NH10.7
The kids aren’t alrightWim Thiery, Stefan Lange, Joeri Rogelj, Sonia I. Seneviratne, Carl-Friedrich Schleussner, Katja Frieler, and Nico Bauer and the ISIMIP modelling team
Will a new-born experience more impacts from climate change than a 60-year-old? While the obvious answer to this question is yes, impacts accumulated across an average person's lifetime have so far not been quantified. Providing such information is however relevant and timely, given the recent surge in societal debate regarding inter-generational solidarity and considering ongoing climate litigation. Here we combine multi-model impact projections from the Inter-Sectoral Impact Model Intercomparison Project (ISIMIP) with temperature trajectories from the IPCC special report on warming of 1.5°C and life expectancy data from the World Bank to compute accumulated impact exposure across lifetimes of people in different age groups and countries. We consider six impact categories (droughts, heatwaves, tropical cyclones, crop failure, floods, and wildfires), for which ISIMIP provides a total of 170 impact projections with 15 different impact models under RCP2.6 and 6.0. Our results highlight that the combined increase in life expectancy and unfolding climate impacts leads to 2-4 times more impacts affecting a new-born compared to a 60-year-old person under current policy pledges. Globally, the increase in exposure for young people is dominated by the strong increase in heatwave hazards. The strongest increases occur in low and lower-middle income countries, where rising impacts compound a substantial increase in life expectancy. Our results overall highlight the strong benefit of aligning policies with the Paris Agreement for safeguarding the future of current young generations.
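The lifetime-accumulation logic can be illustrated with a toy sketch (assumed hazard trajectory, onset year, and life expectancies; not ISIMIP projections): exposure is summed over each cohort's lifetime, so a new-born both lives longer and lives through the more hazardous decades than someone who is 60 today.

```python
import numpy as np

# Toy sketch with assumed numbers (not ISIMIP data): hazard frequency is
# flat historically and rises linearly after an assumed onset year.
def annual_hazard(year, onset=2000, trend=0.03):
    return 1.0 + trend * np.maximum(year - onset, 0)

def lifetime_exposure(birth_year, life_expectancy=80):
    """Accumulate annual hazard frequency over a cohort's full lifetime."""
    years = np.arange(birth_year, birth_year + life_expectancy)
    return annual_hazard(years).sum()

newborn = lifetime_exposure(2020)  # lives through 2020-2099
senior = lifetime_exposure(1960)   # aged 60 in 2020, lives through 1960-2039
print(f"lifetime exposure ratio: {newborn / senior:.1f}x")  # → 2.2x
```

With these toy parameters the ratio falls in the 2-4x range reported in the abstract; the actual figures depend on the impact category, warming pathway, and country-specific life expectancies.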
How to cite: Thiery, W., Lange, S., Rogelj, J., Seneviratne, S. I., Schleussner, C.-F., Frieler, K., and Bauer, N. and the ISIMIP modelling team: The kids aren’t alright, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18648, https://doi.org/10.5194/egusphere-egu2020-18648, 2020.
EGU2020-10527 | Displays | ITS3.2/NH10.7
Warming-level dependent adaptation requirements and consequent limits to adaptationTabea Lissner
The level of detail with which the impacts associated with different levels of global temperature increase can be understood across space and time has been increasing since the publication of the IPCC Special Report on Global Warming of 1.5°C (SR1.5). However, current adaptation assessments are limited in their uptake of this information to understand the implications of different impact pathways for adaptation. Yet the intensity, frequency and timing of impacts are critical determinants of the feasibility of different adaptation options and their associated costs.
There is increasing awareness that limits to adaptation are likely to be approached or crossed at higher levels of warming; however, differential analyses of adaptation needs as a consequence of different warming pathways remain limited. While case-study-based assessments of warming- and scenario-dependent adaptation responses are emerging, an aggregated regional-to-global assessment is so far lacking.
Adaptation is often seen as a process that can draw on existing approaches that have been successfully implemented elsewhere, assuming that a linear increase of impacts would allow impacted regions to scale existing approaches to deal with hazards. However, climate impacts are unlikely to increase linearly across space and time, and we are likely to enter new regimes of impacts in terms of intensity and frequency, with implications for recovery times. As a consequence, existing approaches are unlikely to be able to respond to these fundamentally changed conditions, and limits to adaptation as we know it are likely to be reached. Understanding adaptation needs and potentials at different levels of global warming is therefore urgently needed in order to understand the full scale of the challenge. Such information is also critical for understanding the full cost of different mitigation pathway choices.
This contribution presents a framework for the systematic assessment of warming-level dependent adaptation needs and potentials across sectors and presents first results of this approach in the context of adaptation in the water sector. Our results highlight the need for differentiated approaches to planning adaptation, drawing a strong link to available impact and vulnerability science to avoid mal-adaptation and to understand the full scope of the challenge. Where transformational adaptation will be required under current warming trajectories, early action may reduce the associated costs of such measures across different dimensions, including financial, social or cultural aspects.
How to cite: Lissner, T.: Warming-level dependent adaptation requirements and consequent limits to adaptation, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10527, https://doi.org/10.5194/egusphere-egu2020-10527, 2020.
EGU2020-11591 | Displays | ITS3.2/NH10.7
Extreme events and resilience at different scalesMarkus Reichstein and the RISK-KAN
In this talk we highlight the importance of extreme events for (Earth) system dynamics and sustainable development, as opposed to perspectives which mostly consider gradual changes. We show that climate extremes can contribute to positive carbon-cycle-climate feedbacks, and conjecture that extreme events can trigger fast system changes and instigate "vicious cycles", illustrating this conceptually with ecosystem-related and societal examples. We further discuss risk cascades and emergent and systemic risks in this context with recent local and more global examples. Counter-strategies will also be elaborated. Overall, we propose to consider risk-aware development and systems thinking more strongly in future research and implementation related to extreme events and resilience.
How to cite: Reichstein, M. and the RISK-KAN: Extreme events and resilience at different scales, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11591, https://doi.org/10.5194/egusphere-egu2020-11591, 2020.
EGU2020-8486 | Displays | ITS3.2/NH10.7
Understanding Turning Points in Dryland Ecosystem Functioning (U-TURN)Stephanie Horion, Paulo Bernardino, Wanda De Keersmaecker, Rasmus Fensholt, Stef Lhermitte, Guy Schurgers, Niels Souverijns, Ruben Van De Kerchove, Hans Verbeeck, Jan Verbesselt, Wim Verbruggen, and Ben Somers
Pressures on dryland ecosystems are ever growing. Large-scale vegetation die-offs, biodiversity loss and loss of ecosystem services are reported as a result of unsustainable land use, climate change and extreme events. Yet major uncertainties remain regarding our capability to accurately assess ongoing land changes, as well as to comprehensively attribute drivers to these changes. Indeed, ecosystem response to external pressures is often complex (e.g. non-linear) and non-unique (i.e. same response, different drivers). Moreover, critical knowledge on ecosystem stability and coping capacities in the face of extreme events still has to be consolidated.
Recent advances in time series analysis and breakpoint detection open a new door in ecosystem research, as they allow for the detection of turning points and tipping points in ecosystem development (Horion et al., 2016, 2019). Identifying ecosystems that have significantly changed their way of functioning, i.e. that have tipped to a new functioning state, is of crucial importance for ecological studies. These extreme cases of vegetation instability are gold mines for researchers trying to understand how resilient ecosystems are to climate change and unsustainable land use.
This is precisely what the U-TURN project is about:
- Developing methods for detecting turning points in dryland ecosystem functioning. Here we define a turning point in ecosystem functioning as a key moment in ecosystem development at which its functioning is significantly changed or altered, without implying irreversibility of the process (Horion et al., 2016), as opposed to the term ‘tipping point’, which implies irreversibility (Lenton et al., 2008).
- Studying the contribution of climate and human pressure (e.g. land-use intensification, human-induced soil degradation) in pushing ecosystems outside their safe operating space. Here we use Earth Observation techniques coupled with Dynamic Vegetation Models to gain process-based insights into the drivers of observed changes in ecosystem functioning.
- Exploring whether early warning signals of turning points can be identified.
During our talk, we will present key methodological advances achieved within the U-TURN project and showcase some of our major findings on abrupt changes in dryland ecosystem functioning.
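As a simplified illustration of the kind of breakpoint detection underlying turning-point analysis (a minimal sketch, not the project's actual algorithms), a single turning point in a vegetation-index time series can be located by minimising the pooled residual sum of squares of a two-segment mean model:

```python
import numpy as np

def detect_turning_point(series):
    """Locate the single most likely turning point in a 1-D time series
    by minimising the pooled residual sum of squares of a two-segment
    mean model (a deliberately simplified stand-in for more advanced
    breakpoint methods)."""
    n = len(series)
    best_idx, best_rss = None, np.inf
    for k in range(2, n - 2):  # require at least 2 points per segment
        # var * segment length equals the segment's residual sum of squares
        rss = np.var(series[:k]) * k + np.var(series[k:]) * (n - k)
        if rss < best_rss:
            best_idx, best_rss = k, rss
    return best_idx

# Synthetic NDVI-like series with an abrupt functioning change at t = 60
rng = np.random.default_rng(0)
series = np.concatenate([0.6 + 0.02 * rng.standard_normal(60),
                         0.4 + 0.02 * rng.standard_normal(40)])
print(detect_turning_point(series))  # close to 60
```

Real applications would additionally test the breakpoint's significance and allow for multiple breaks, trends and seasonality, as in the methods cited above.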
References:
Horion, S., Ivits, E., De Keersmaecker, W., Tagesson, T., Vogt, J., & Fensholt, R. (2019). Mapping European ecosystem change types in response to land‐use change, extreme climate events, and land degradation. Land Degradation & Development, 30(8), 951-963. doi:10.1002/ldr.3282
Horion, S., Prishchepov, A. V., Verbesselt, J., de Beurs, K., Tagesson, T., & Fensholt, R. (2016). Revealing turning points in ecosystem functioning over the Northern Eurasian agricultural frontier. Global Change Biology, 22(8), 2801-2817. doi:10.1111/gcb.13267
Lenton, T. M., Held, H., Kriegler, E., Hall, J. W., Lucht, W., Rahmstorf, S., & Schellnhuber, H. J. (2008). Tipping elements in the Earth's climate system. Proc Natl Acad Sci U S A, 105(6), 1786-1793. doi:10.1073/pnas.0705414105
Project website: http://uturndryland.wixsite.com/uturn
This research is funded by the Belgian Federal Science Policy Office (Grant/Award Number: SR/00/339)
How to cite: Horion, S., Bernardino, P., De Keersmaecker, W., Fensholt, R., Lhermitte, S., Schurgers, G., Souverijns, N., Van De Kerchove, R., Verbeeck, H., Verbesselt, J., Verbruggen, W., and Somers, B.: Understanding Turning Points in Dryland Ecosystem Functioning (U-TURN), EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8486, https://doi.org/10.5194/egusphere-egu2020-8486, 2020.
EGU2020-2735 | Displays | ITS3.2/NH10.7 | Highlight
Is there warming in the pipeline? A multi-model analysis of the zero emission commitment from CO2Andrew H. MacDougall, Thomas L. Frölicher, Chris D. Jones, Joeri Rogelj, H. Damon Matthews, and Kirsten Zickfeld and the Zero Emissions Commitment Model Intercomparison Project
The Zero Emissions Commitment (ZEC) is the change in global mean temperature expected to occur following the cessation of net CO2 emissions and, as such, is a critical parameter for calculating the remaining carbon budget. The Zero Emissions Commitment Model Intercomparison Project (ZECMIP) was established to gain a better understanding of the potential magnitude and sign of ZEC, in addition to the processes that underlie this metric. Eighteen Earth system models of both full and intermediate complexity participated in ZECMIP. All models conducted an experiment in which atmospheric CO2 concentration increases exponentially until 1000 PgC has been emitted. Thereafter emissions are set to zero and models are configured to allow free evolution of atmospheric CO2 concentration. The inter-model range of ZEC 50 years after emissions cease for the 1000 PgC experiment is -0.36 to 0.29 °C, with a model ensemble mean of -0.06 °C, a median of -0.05 °C and a standard deviation of 0.19 °C. Models exhibit a wide variety of behaviours after emissions cease, with some models continuing to warm for decades to millennia and others cooling substantially. Analysis shows that both ocean carbon uptake and carbon uptake by the terrestrial biosphere are important for counteracting the warming effect of the reduction in ocean heat uptake in the decades after emissions cease.
Overall, the most likely value of ZEC on decadal time scales is assessed to be close to zero, consistent with prior work. However, substantial continued warming for decades or centuries following the cessation of emissions is a feature of a minority of the assessed models and thus cannot be ruled out.
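The ZEC metric itself is simple to compute once a model's temperature trajectory is available. The following sketch uses a hypothetical, synthetic temperature series (not ZECMIP output) to illustrate the definition of ZEC at a 50-year horizon:

```python
import numpy as np

def zec(temps, cessation_idx, horizon=50):
    """ZEC at a given horizon: the global mean temperature change some
    years after emissions cease, relative to the year of cessation."""
    return temps[cessation_idx + horizon] - temps[cessation_idx]

# Hypothetical GMST anomaly series (°C): steady warming while emitting,
# then slight cooling after net-zero is reached at year 100.
years = np.arange(200)
temps = np.where(years < 100, 0.02 * years, 2.0 - 0.001 * (years - 100))
print(round(zec(temps, 100), 2))  # -> -0.05 for this synthetic series
```

A negative value, as here, means the model cools after cessation; a positive value would indicate "warming in the pipeline".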
How to cite: MacDougall, A. H., Frölicher, T. L., Jones, C. D., Rogelj, J., Matthews, H. D., and Zickfeld, K. and the Zero Emissions Commitment Model Intercomparison Project: Is there warming in the pipeline? A multi-model analysis of the zero emission commitment from CO2, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2735, https://doi.org/10.5194/egusphere-egu2020-2735, 2020.
EGU2020-20525 | Displays | ITS3.2/NH10.7
Towards a quantification of the water planetary boundaryLan Wang-Erlandsson, Tom Gleeson, Fernando Jaramillo, Samuel C. Zipper, Dieter Gerten, Arne Tobian, Miina Porkka, Agnes Pranindita, Ruud van der Ent, Patrick Keys, Ingo Fetzer, Matti Kummu, Anna Chrysafi, Will Steffen, Hubert Savenije, Makoto Taniguchi, Line Gordon, Sarah Cornell, Arie Staal, and Yoshihide Wada et al.
The planetary boundaries framework defines nine Earth system processes that together demarcate a safe operating space for humanity at the planetary scale. Freshwater - the bloodstream of the biosphere - is an obvious member of the planetary boundary framework. Water fluxes and stores play a key role for the stability of the Earth’s climate and the world’s aquatic and terrestrial ecosystems. Recent work has proposed to represent the water planetary boundary through six sub-boundaries based on the five primary water stores, i.e., atmospheric water, soil moisture, surface water, groundwater, and frozen water. In order to make it usable on all spatial scales, we examine bottom-up and top-down approaches for quantification of the water planetary boundary. For the bottom-up approaches, we explore possible spatially distributed variables defining each of the proposed sub-boundaries, as well as possible weighting factors and keystone regions that can be used for aggregation of the distributed water sub-boundaries to the global scale. For the top-down approaches, we re-examine the stability of key biomes and tipping elements in the Earth System that may be crucially influenced by water cycle modifications. To identify the most appropriate variables for representing the water planetary boundary, we evaluate the range of explored variables with regard to scientific evidence and scientific representation using a hierarchy-based evaluation framework. Finally, we compare the highest ranked top-down and bottom-up approaches in terms of the scientific outcome and implications for governance. In sum, this comprehensive and systematic identification and evaluation of variables, weighting factors, and baseline conditions provides a detailed basis for the future operational quantification of the water planetary boundary.
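As a purely illustrative sketch of the bottom-up aggregation step (the choice of weighting factors and baselines is precisely what the study sets out to evaluate, and all numbers below are hypothetical), a spatially distributed sub-boundary could be aggregated to the global scale as a weighted mean of local transgression ratios:

```python
import numpy as np

def aggregate_subboundary(local_values, local_boundaries, weights):
    """Aggregate a spatially distributed water sub-boundary to the global
    scale as a weighted mean of local transgression ratios
    (value / boundary). Values above 1 indicate local transgression.
    Illustrative only; actual weighting schemes are under evaluation."""
    ratios = np.asarray(local_values) / np.asarray(local_boundaries)
    return float(np.average(ratios, weights=np.asarray(weights, float)))

# Three hypothetical regions; the 'keystone' region (index 2) gets
# double weight in the global aggregate.
status = aggregate_subboundary(local_values=[0.8, 1.1, 1.4],
                               local_boundaries=[1.0, 1.0, 1.0],
                               weights=[1, 1, 2])
print(round(status, 3))  # -> 1.175, i.e. transgressed at the global scale
```

Upweighting keystone regions means that a strong transgression there can push the global indicator past 1 even when other regions remain within their local boundaries.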
How to cite: Wang-Erlandsson, L., Gleeson, T., Jaramillo, F., Zipper, S. C., Gerten, D., Tobian, A., Porkka, M., Pranindita, A., van der Ent, R., Keys, P., Fetzer, I., Kummu, M., Chrysafi, A., Steffen, W., Savenije, H., Taniguchi, M., Gordon, L., Cornell, S., Staal, A., and Wada, Y. and the et. al.: Towards a quantification of the water planetary boundary , EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20525, https://doi.org/10.5194/egusphere-egu2020-20525, 2020.
EGU2020-7217 | Displays | ITS3.2/NH10.7
Hysteresis of tropical forests in the 21st centuryArie Staal, Ingo Fetzer, Lan Wang-Erlandsson, Joyce Bosmans, Stefan Dekker, Egbert van Nes, Johan Rockström, and Obbe Tuinenburg
Tropical forests modify the conditions they depend on through feedbacks on different spatial scales. These feedbacks shape the hysteresis (history-dependence) of tropical forests, thus controlling their resilience to deforestation and response to climate change. Here we present the emergent hysteresis from local-scale tipping points and regional-scale forest-rainfall feedbacks across the tropics under the recent climate and a severe climate-change scenario. By integrating remote sensing, a global hydrological model, and detailed atmospheric moisture tracking simulations, we find that forest-rainfall feedback expands the range of possible forest distributions especially in the Amazon. The Amazon forest could partially recover from complete deforestation, but may lose that resilience later this century. The Congo forest lacks resilience, but gains it under climate change, whereas forests in Australasia are resilient under both current and future climates. Our results show how tropical forests shape their own distributions and create the climatic conditions that enable them.
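The hysteresis described above can be illustrated with a deliberately minimal toy model (not the study's coupled remote-sensing, hydrological and moisture-tracking framework; all parameter values are hypothetical), in which forest-recycled moisture keeps rainfall above the forest-persistence threshold:

```python
import numpy as np

def forest_state(p_ext, f_prev, feedback=400.0, threshold=1500.0):
    """Toy discrete-state forest model: forest (1.0) persists if total
    rainfall (external + forest-recycled moisture) exceeds a threshold;
    once lost, forest (0.0) only recovers if external rainfall alone
    exceeds the threshold."""
    rainfall = p_ext + feedback * f_prev  # mm/yr; forest adds recycled rain
    return 1.0 if rainfall > threshold else 0.0

# Sweep external rainfall down, then back up, tracking the forest state
state = 1.0
collapse_at = recovery_at = None
for p in np.arange(1700, 900, -50):      # drying phase
    new = forest_state(p, state)
    if state == 1.0 and new == 0.0:
        collapse_at = p
    state = new
for p in np.arange(900, 1700, 50):       # wetting phase
    new = forest_state(p, state)
    if state == 0.0 and new == 1.0:
        recovery_at = p
    state = new
print(collapse_at, recovery_at)  # -> 1100 1550
```

The gap between the collapse and recovery points is the hysteresis: over the intermediate rainfall range both forest and non-forest states are stable, so the system's state depends on its history.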
How to cite: Staal, A., Fetzer, I., Wang-Erlandsson, L., Bosmans, J., Dekker, S., van Nes, E., Rockström, J., and Tuinenburg, O.: Hysteresis of tropical forests in the 21st century, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7217, https://doi.org/10.5194/egusphere-egu2020-7217, 2020.
EGU2020-21230 | Displays | ITS3.2/NH10.7 | Highlight
Social tipping dynamics for stabilizing Earth’s climate by 2050Ilona M. Otto and Jonathan Donges
Safely achieving the goals of the Paris Climate Agreement requires a world-wide transformation to carbon-neutral societies within the next 30 years. Accelerated technological progress and policy implementations are required to deliver emissions reductions at rates sufficiently fast to avoid crossing dangerous tipping points in the Earth’s climate system. Here, we discuss and evaluate the potential of social tipping interventions (STIs) that can activate contagious processes of rapidly spreading technologies, behaviors, social norms and structural reorganization within their functional domains that we refer to as social tipping elements (STEs). STEs are subdomains of the planetary socio-economic system where the required disruptive change may take place and lead to a sufficiently fast reduction in anthropogenic greenhouse gas emissions. The results are based on online expert elicitation, a subsequent expert workshop, and a literature review. The social tipping interventions that could trigger the tipping of STE subsystems include (i) removing fossil fuel subsidies and incentivizing decentralized energy generation (STE1: energy production and storage systems), (ii) building carbon-neutral cities (STE2: human settlements), (iii) divesting from assets linked to fossil fuels (STE3: financial markets), (iv) revealing the moral implications of fossil fuels (STE4: norms and value systems), (v) strengthening climate education and engagement (STE5: education system) and (vi) disclosing information on greenhouse gas emissions (STE6: information feedbacks). Our research reveals important areas of focus for larger-scale empirical and modeling efforts to better understand the potentials of harnessing social tipping dynamics for climate change mitigation.
How to cite: Otto, I. M. and Donges, J.: Social tipping dynamics for stabilizing Earth’s climate by 2050, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21230, https://doi.org/10.5194/egusphere-egu2020-21230, 2020.
EGU2020-21493 | Displays | ITS3.2/NH10.7
Climate change induced socio-economic tipping pointsKees van Ginkel, Wouter Botzen, Marjolijn Haasnoot, Gabriel Bachner, Karl Steininger, Jochen Hinkel, Paul Watkiss, Esther Boere, Ad Jeuken, Elisa Sainz de Murieta, and Francesco Bosello
The concept of tipping points has received much attention in research on climate change. In the biophysical realm, climate tipping points describe critical thresholds at which large-scale elements of the Earth switch to a qualitatively different state; and ecological tipping points describe thresholds separating distinct dynamic regimes of ecosystems. The tipping point metaphor is also used to indicate transformative change in adaptation and mitigation strategies. However, there remains an underexplored field: climate change induced socio-economic tipping points (SETPs). We define an SETP as a climate-change-induced, abrupt change of a socio-economic system into a new, fundamentally different state. We make a distinction between SETPs in terms of transformational response to climate change and SETPs in terms of socio-economic impacts.
SETPs are points where a gradual change in climatic conditions causes an abrupt, fundamental reconfiguration of the socio-economic system. Through a stakeholder consultation, we identified 22 candidate SETP examples with policy relevance for Europe. Three of these were investigated in more detail, with special attention for their tipping point characteristics (stable states at both sides of a critical threshold, abrupt transition between those states, and the mechanism explaining the non-linear and abrupt behaviour).
The first example is the collapse of winter sports tourism in low-altitude ski resorts. In the face of climate change, this may occur abruptly, cause a fundamental reconfiguration of the local and regional economy, and be very hard to reverse. In some cases, it could be possible to achieve a fundamental shift towards summer tourism.
The second example is farmland abandonment in Southern Europe. Large parts of Spain have already seen widespread farmland abandonment and associated migration. Increasing heat and drought may worsen the conditions, with considerable social, and to a lesser extent, economic consequences. On the local scale, this manifests itself as a clear SETP: a lively agricultural area suddenly tips into the ‘Spanish Lapland’: deserted farms, villages with an ageing population, little economic activity, and underdeveloped infrastructure and facilities.
The third example is sea-level-rise-induced reconfiguration of coastal zones. In the face of accelerating sea level rise (SLR), threatened communities may retreat from vulnerable coastal zones. This may be caused by migration (voluntary human mobility), displacement (involuntary movement following a disaster) or relocation (retreat managed by the government). The SETP of retreat from a certain area is usually triggered by a flood event. However, adaptation to increasing flood risk may also be so transformative that it can be considered a structural reconfiguration of the system. This is currently seen in The Netherlands, where studies on extreme SLR have triggered a debate in which very transformative strategies are proposed, such as constructing a dike in front of the entire coast, retreating from areas with economic stagnation and population decline, or elevating all new buildings above sea level.
A key insight is that the rate of climate change may exceed the capacity of society to adapt in the traditional way, triggering a shift towards fundamentally different policies and a reconfiguration of the socio-economic system.
How to cite: van Ginkel, K., Botzen, W., Haasnoot, M., Bachner, G., Steininger, K., Hinkel, J., Watkiss, P., Boere, E., Jeuken, A., Sainz de Murieta, E., and Bosello, F.: Climate change induced socio-economic tipping points , EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21493, https://doi.org/10.5194/egusphere-egu2020-21493, 2020.
The concept of tipping points has received much attention in research on climate change. In the biophysical realm, climate tipping points describe critical thresholds at which large-scale elements of the Earth switch to a qualitatively different state; and ecological tipping points describe thresholds separating distinct dynamic regimes of ecosystems. The tipping point metaphor is also used to indicate transformative change in adaptation and mitigation strategies. However, there remains an underexplored field: climate change induced socio-economic tipping points (SETPs). We define an SETP as: a climate change induced, abrupt change of a socio-economic system, into a new, fundamentally different state. We make a distinction between SETPs in terms of transformational response to climate change and SETPs in terms of socio-economic impacts.
SETPs are points where a gradual change in climatic conditions causes an abrupt, fundamental reconfiguration of the socio-economic system. Through a stakeholder consultation, we identified 22 candidate SETP examples with policy relevance for Europe. Three of these were investigated in more detail, with special attention for their tipping point characteristics (stable states at both sides of a critical threshold, abrupt transition between those states, and the mechanism explaining the non-linear and abrupt behaviour).
The first example is the collapse of winter sports tourism in low-altitude ski resorts. In the face of climate change, this may occur abrupt, cause a fundamental reconfiguration of the local and regional economy, and is very hard to reverse. In some cases, it could be possible to achieve a fundamental shift towards summer tourism.
The second example is the farmland abandonment in Southern Europe. Large parts of Spain have already seen widespread farmland abandonment and associated migration. Increasing heat and drought may worsen the conditions, with considerable social, and to a lesser extent, economic consequences. On the local scale, this manifests itself as a clear SETP: a lively agricultural area suddenly tips to the ‘Spanish Lapland’: deserted farms, villages with ageing population, little economic activity and underdeveloped infrastructure and facilities.
The third example is sea-level rise induced reconfiguration of coastal zones. In the face of accelerating sea level rise (SLR), threatened communities may retreat from vulnerable coastal zones. This may be caused by migration (voluntary human mobility), displacement (involuntary movement following a disaster) or relocation (retreat managed by the government). The SETP of retreat from a certain area is usually triggered by a flood event. However, also the adaptation to increasing flood risk may be so transformative, that it can be considered a structural configuration of the system. This is currently seen in The Netherlands, where studies on extreme SLR have triggered a debate in which very transformative strategies are proposed, such as: constructing a dike in front of the entire coast, retreat from areas with economic stagnation and population decline, or elevating all new buildings above sea level.
A key insight is that the rate of climate change may exceed the capacity of society to adapt in the traditional way, triggering a shift towards fundamentally different policies and a reconfiguration of the socio-economic system.
How to cite: van Ginkel, K., Botzen, W., Haasnoot, M., Bachner, G., Steininger, K., Hinkel, J., Watkiss, P., Boere, E., Jeuken, A., Sainz de Murieta, E., and Bosello, F.: Climate change induced socio-economic tipping points, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21493, https://doi.org/10.5194/egusphere-egu2020-21493, 2020.
EGU2020-2374 | Displays | ITS3.2/NH10.7
Indian Monsoon Rainfall Amount Forecast: Network-based Approach and Climate Change Benefits
Jingfang Fan, Jun Meng, Josef Ludescher, Zhaoyuan Li, Elena Surovyatkina, Xiaosong Chen, Juergen Kurths, and Hans Joachim Schellnhuber
The Indian summer monsoon rainfall (ISMR) has a decisive influence on India's agricultural output and economy. Extreme departures from the normal seasonal amount of rainfall can cause severe droughts or floods, affecting Indian food production and security. Despite the development of sophisticated statistical and dynamical climate models, a long-term and reliable prediction of ISMR has remained a challenging problem. Towards achieving this goal, here we construct a series of dynamical and physical climate networks based on the global near-surface air temperature field. We uncover that some characteristics of the directed and weighted climate networks can serve as good early warning signals for ISMR forecasting. The developed prediction method can produce a forecast skill of 0.5 by using the previous calendar year's data (5-month lead time). The skill of our ISMR forecast is comparable to the best statistical and dynamical forecast models, which start in May or June. We reveal that global warming affects the climate network by enhancing cross-equatorial teleconnections between the Southwest Atlantic and the North Asia-Pacific, which significantly impact global precipitation. Remarkably, the consequences of climate change lead to improved prediction skill. We discuss the underlying mechanism of our predictor and associate it with network-delayed ENSO and ENSO-monsoon connections. Moreover, we find that this approach is not limited to the prediction of all-India rainfall but can also be applied to forecast rainfall in the Indian homogeneous regions. The network-based approach developed in the present work provides a new perspective on the regional forecasting of the ISMR, and can potentially be used as a prototype for other monsoon systems.
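As a rough illustration of the network construction step described above, the sketch below builds a directed, weighted climate network from lagged cross-correlations of temperature anomalies and derives a per-node degree statistic. The data, threshold, and correlation measure are simplified stand-ins for illustration; the abstract's actual method (global reanalysis fields, significance testing, the specific forecast rule) is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for near-surface air temperature anomalies:
# 20 grid nodes x 365 daily time steps (the study uses global fields).
anomalies = rng.standard_normal((20, 365))

def climate_network(data, max_lag=10, threshold=0.9):
    """Directed, weighted links from lagged cross-correlations.

    A link i -> j is kept when the peak absolute cross-correlation
    over positive lags exceeds the given quantile of all link weights.
    """
    n, t = data.shape
    # Normalize each node's series to zero mean and unit variance.
    z = (data - data.mean(axis=1, keepdims=True)) / data.std(axis=1, keepdims=True)
    weights = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # Correlation of node i leading node j by `lag` steps.
            cc = [abs(np.dot(z[i, : t - lag], z[j, lag:])) / (t - lag)
                  for lag in range(1, max_lag + 1)]
            weights[i, j] = max(cc)
    cutoff = np.quantile(weights[weights > 0], threshold)
    adjacency = (weights >= cutoff).astype(int)
    return adjacency, weights

adj, w = climate_network(anomalies)
out_degree = adj.sum(axis=1)  # candidate early-warning statistic per node
```

Tracking how such degree or weight statistics change over time is one simple way a network characteristic could act as a precursor signal.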
How to cite: Fan, J., Meng, J., Ludescher, J., Li, Z., Surovyatkina, E., Chen, X., Kurths, J., and Schellnhuber, H. J.: Indian Monsoon Rainfall Amount Forecast: Network-based Approach and Climate Change Benefits, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2374, https://doi.org/10.5194/egusphere-egu2020-2374, 2020.
EGU2020-4335 | Displays | ITS3.2/NH10.7
Risk and vulnerability of Mongolian grasslands to climate change and grazing
Banzragch Nandintsetseg, Bazartsersen Boldgiv, Jinfeng Chang, Philippe Ciais, Masato Shinoda, Yong Mei, and Nils Christian Stenseth
Robust changes in climatic hazards, including droughts, heatwaves and dust storms, are evident in many parts of the world, and they are expected to increase in magnitude and frequency in the future. At the same time, socio-ecological damage from climate-related disasters has increased worldwide, including in the Eurasian steppes, notably the Mongolian grasslands (MGs), which occur in an arid and harsh cold climate and still support a traditional nomadic livelihood and culture through the food supply, and agricultural and ecosystem services. In the 2000s, increasing climate disasters (droughts combined with anomalously harsh winters (dzuds in Mongolian), and dust storms) resulted in massive livestock deaths, causing socioeconomic stagnation. In this context, assessments of the risk and vulnerability of MGs to climate change and grazing may support disaster risk management by helping to identify hazard risk hotspots, allowing herders in risky areas to prepare for events and to mitigate potential future impacts. Here, we examine the risk and vulnerability of the MG ecosystem to droughts at the national level over a 40-year period (1976–2015) using simulations of a gridded process-based ecosystem model, contrasting the recent (1996–2015) and past (1976–1995) 20-year periods. In general, the model realistically simulates the temporal and spatial variations of vegetation biomass and soil moisture captured by field and satellite observations over MGs during 2000–2015. We apply a probabilistic risk analysis in which risk is the product of the probability of hazardous drought during June–August and ecosystem vulnerability. Results reveal that during 1976–2015, droughts increased over MGs with rapid warming and slight drying, particularly in the recent 20-year period, accompanied by ever-increasing grazing intensity, which together resulted in declining trends in grassland productivity.
During the recent 20-year period, the risk of drought to productivity slightly increased over extended areas of MGs compared to the past 20-year period. The increase in risk to MGs was predominantly caused by the climate change-induced increase in the probability of hazardous drought, and less by vulnerability. Regionally, recent droughts modified the risk to grasslands, particularly in northcentral and northeast Mongolia. Given the benefits of MGs for both ecosystem services and socio-economic outcomes, recent increases in drought hazards and the associated risk to MGs signal an urgent need to implement drought management policies that sustain MGs.
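The probabilistic risk analysis used here, with risk defined as the product of the probability of hazardous drought and ecosystem vulnerability, can be sketched as follows. The anomaly series, hazard threshold, and vulnerability value are hypothetical; the study derives these quantities per grid cell from a process-based ecosystem model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical June-August precipitation anomalies for one grid cell over
# 40 years; the study derives these from a process-based ecosystem model.
jja_anomaly = rng.normal(loc=0.0, scale=1.0, size=40)

def drought_risk(anomaly, hazard_threshold=-0.8, vulnerability=0.35):
    """Risk = P(hazardous drought) x ecosystem vulnerability."""
    p_hazard = float(np.mean(anomaly < hazard_threshold))
    return p_hazard * vulnerability

# Contrast the past (first 20 years) and recent (last 20 years) periods.
risk_past = drought_risk(jja_anomaly[:20])
risk_recent = drought_risk(jja_anomaly[20:])
```

Comparing the two 20-year windows in this way mirrors the abstract's past-versus-recent contrast, with risk bounded above by the vulnerability value.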
How to cite: Nandintsetseg, B., Boldgiv, B., Chang, J., Ciais, P., Shinoda, M., Mei, Y., and Stenseth, N. C.: Risk and vulnerability of Mongolian grasslands to climate change and grazing, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4335, https://doi.org/10.5194/egusphere-egu2020-4335, 2020.
EGU2020-4490 | Displays | ITS3.2/NH10.7
Dynamic emergence of domino effects in systems of interacting tipping elements in ecology and climate
Ann Kristin Klose, Volker Karle, Ricarda Winkelmann, and Jonathan F. Donges
In ecology, climate and other fields, systems have been identified that can transition into a qualitatively different state when a critical threshold or tipping point in a driving process is crossed. An understanding of those tipping elements is of great interest given the increasing influence of humans on the biophysical Earth system. Tipping elements are not independent of each other, as complex interactions exist between them, e.g. through physical mechanisms that connect subsystems of the climate system.
Based on earlier work on such coupled nonlinear systems, we systematically assessed the qualitative long-term behavior of interacting tipping elements. We developed an understanding of how interactions affect tipping behavior, allowing domino effects and tipping cascades to emerge under certain conditions.
The application of these qualitative results to real-world examples of interacting tipping elements shows that domino effects with profound consequences can occur: the interacting Greenland ice sheet and thermohaline ocean circulation might tip before the tipping points of the isolated subsystems are crossed. The eutrophication of the first lake in a lake chain might propagate through the following lakes without a crossing of their individual critical nutrient input levels.
The possibility of emerging domino effects calls for the development of a unified theory of interacting tipping elements and the quantitative analysis of interacting real-world tipping elements.
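A minimal sketch of how a domino effect can arise between two coupled tipping elements, using the normal form of a fold bifurcation (a common idealization in the tipping-element literature; the parameter values and coupling form here are illustrative assumptions, not those of the study):

```python
def integrate(c1, c2, d21, t_end=200.0, dt=0.01):
    """Euler integration of two coupled fold-type tipping elements.

    dx/dt = -x**3 + x + c is the normal form of a fold tipping element:
    for |c| below ~2/(3*sqrt(3)) ~ 0.385 two stable states coexist; above
    it, the lower state vanishes and the element tips to the upper one.
    Element 2 feels an extra forcing d21*(x1 + 1) once element 1 leaves
    its lower state (x1 ~ -1), so tipping can propagate.
    """
    x1 = x2 = -1.0  # both elements start in the lower stable state
    for _ in range(int(t_end / dt)):
        dx1 = -x1**3 + x1 + c1
        dx2 = -x2**3 + x2 + c2 + d21 * (x1 + 1.0)
        x1 += dt * dx1
        x2 += dt * dx2
    return x1, x2

# Element 1 is forced past its threshold (c1 = 0.5 > 0.385) while element 2
# is not (c2 = 0.2), yet the coupling carries the transition over: a domino
# effect. Without coupling, element 2 stays in its lower state.
x1, x2 = integrate(c1=0.5, c2=0.2, d21=0.3)
x1_off, x2_off = integrate(c1=0.5, c2=0.2, d21=0.0)
```

The second element tips even though its own forcing never crosses its individual critical threshold, which is exactly the cascade behaviour described for the lake chain and the Greenland ice sheet–ocean circulation pair.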
How to cite: Klose, A. K., Karle, V., Winkelmann, R., and Donges, J. F.: Dynamic emergence of domino effects in systems of interacting tipping elements in ecology and climate, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4490, https://doi.org/10.5194/egusphere-egu2020-4490, 2020.
EGU2020-4630 | Displays | ITS3.2/NH10.7
Systemic risks emerging from global climate hotspots and their impacts on Europe
Franziska Gaupp and Jana Sillmann
In a globalized world, Europe is increasingly affected by climate change events beyond its borders that propagate through our interconnected systems, impacting socio-economic welfare in Europe. The REmote Climate Effects and their Impact on European sustainability, Policy and Trade (RECEIPT) project uses a novel stakeholder-driven storytelling approach that maps representative connections between remote climate hazards such as droughts or hurricanes and European socio-economic activities in the agricultural, finance, development, shipping and manufacturing sectors. As part of RECEIPT, this work focuses on systemic risks in global climate risk hotspots and their knock-on effects on the European economy. In five stakeholder workshops, expert elicitation methods are used to identify and map sector- and storyline-specific systemic risks: interlinkages between different events, hidden causes and consequences, potential feedback loops, uncertainties and other systemic risk characteristics will be investigated. A special focus lies on “gray rhino” events: “foreseeable random surprises” that follow clear warning signs but are known only to a smaller group of people. Results reveal sector-specific “topographies of risk” within the storylines identified by stakeholders.
How to cite: Gaupp, F. and Sillmann, J.: Systemic risks emerging from global climate hotspots and their impacts on Europe, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4630, https://doi.org/10.5194/egusphere-egu2020-4630, 2020.
EGU2020-9696 | Displays | ITS3.2/NH10.7
Rootzone storage potential indicates the extent of rainforest resilience
Chandrakant Singh, Ruud J. van der Ent, Ingo Fetzer, and Lan Wang-Erlandsson
Changes in rainfall patterns and extended drought events can cause water stress in rainforests, which can lead to a permanent shift of the biome into a savanna state. In response, rainforests may adapt to such environmental stress conditions to sustain ecosystem functioning, or lose functioning altogether. Previous studies of forest resilience have mostly relied on precipitation or climatological drought as a control variable, but neither is a direct measure of forest resilience. As such, the adaptability dynamics of the forest are poorly understood. Our research defines this adaptation capacity of vegetation, a dynamic reserve that the rainforest can utilize before a potential shift to an alternate stable state, as the resilience of the rainforest. Here, we introduce the Rootzone Storage Potential (RZSP) as a direct water-stress metric for understanding adaptive forest resilience behaviour, based on the cumulative difference between precipitation and (radiation-based) potential evaporation. Since the potential evaporation used for the RZSP calculation is purely radiation-based, it minimizes the effect of moisture recycling (and transport) on the system. RZSP represents the capacity of vegetation to optimize its resources to endure the greatest dry period. In this study, we investigated the spatio-temporal resilience loss of the South American and African rainforests. An increasing trend of resilience loss was observed over the past few decades. RZSP is thus a useful indicator for estimating the resilience dynamics and water-stress characteristics of the rainforest.
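A minimal sketch of the kind of storage-deficit calculation the RZSP concept builds on, i.e. the largest cumulative excess of potential evaporation over precipitation that vegetation would have to bridge from root-zone storage. The toy monthly series and the deficit-reset rule are illustrative assumptions, not the study's exact algorithm.

```python
import numpy as np

def rootzone_storage_potential(precip, pot_evap):
    """Largest cumulative deficit of potential evaporation over
    precipitation: the storage the root zone would need in order to
    bridge the worst dry period in the record."""
    deficit = np.asarray(pot_evap, dtype=float) - np.asarray(precip, dtype=float)
    running = worst = 0.0
    for d in deficit:
        running = max(running + d, 0.0)  # rain surplus refills storage
        worst = max(worst, running)
    return worst

# Toy monthly series (mm): a wet season followed by a pronounced dry season.
precip   = [220, 200, 180, 90, 40, 10, 5, 10, 30, 80, 150, 210]
pot_evap = [100, 100, 110, 120, 130, 140, 140, 135, 125, 115, 105, 100]
rzsp = rootzone_storage_potential(precip, pot_evap)  # 640.0 mm here
```

Computed over moving windows of gridded data, a shrinking margin between this required storage and what the vegetation can supply would indicate resilience loss.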
How to cite: Singh, C., van der Ent, R. J., Fetzer, I., and Wang-Erlandsson, L.: Rootzone storage potential indicates the extent of rainforest resilience, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9696, https://doi.org/10.5194/egusphere-egu2020-9696, 2020.
EGU2020-12439 | Displays | ITS3.2/NH10.7
Intelligent Extraction and Generation Technology of Emergency Plan
Haiyang Liu
Natural disasters pose a huge threat to the safety of human life and property. When disasters happen, leaders at all levels need to respond in time. Emergency plans can be regarded as effective guidance for natural disaster emergency responses; they include textual descriptions of emergency response processes in natural language. In this paper, we propose an approach that automatically extracts emergency response process models from Chinese emergency plans and automatically generates appropriate emergency plans. First, the emergency plan is represented as a text tree according to its layout markups and sentence-sequential relations. Then, process model elements, including four-level response condition formulas, executive roles, response tasks, and flow relations, are identified by rule-based approaches. An emergency response process tree is generated from both the text tree and the extracted process model elements, and is transformed into an emergency response process modeled in business process modeling notation. Finally, when different disasters occur, a new plan is generated based on training on a database of historical plans. A large number of experiments on actual emergency plans show that this method can extract the emergency response process model and generate a suitable new plan.
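A toy sketch of the rule-based extraction idea, using a tiny English stand-in for a plan text; the actual system works on Chinese plans with richer layout markups, condition formulas and flow relations, so the plan text, headings, and the single rule below are all hypothetical.

```python
import re

# Tiny English stand-in for an emergency plan section; the real system
# parses Chinese plans with richer layout markups and rules.
plan = """\
1 Level III response
1.1 The flood control office shall issue a warning within 2 hours.
1.2 The rescue team shall evacuate residents from low-lying areas.
2 Level II response
2.1 The municipal government shall activate the emergency command center.
"""

# One illustrative rule: "<role> shall <task>." sentences become
# (executive role, response task) pairs.
ROLE_TASK = re.compile(r"^(?P<role>The [\w ]+?) shall (?P<task>.+?)\.$")

def extract_process(text):
    """Build a two-level text tree from the numbered headings, then apply
    the rule to each leaf sentence under its response-level heading."""
    tree, current = {}, None
    for line in text.splitlines():
        if not line.strip():
            continue
        num, _, body = line.partition(" ")
        if "." not in num:  # top-level heading -> a response level
            current = body
            tree[current] = []
        else:               # leaf sentence under the current heading
            match = ROLE_TASK.match(body)
            if match:
                tree[current].append((match.group("role"), match.group("task")))
    return tree

process = extract_process(plan)
```

The resulting role/task pairs grouped by response level correspond to the executive roles and response tasks that the paper's pipeline would subsequently assemble into a process tree and a BPMN model.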
How to cite: liu, H.: Intelligent Extraction and Generation Technology of Emergency Plan, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12439, https://doi.org/10.5194/egusphere-egu2020-12439, 2020.
EGU2020-13341 | Displays | ITS3.2/NH10.7
Analysis and management of unexpected and cumulative climate risks in Switzerland
Raphael Neukom, Nadine Salzmann, Christian Huggel, Veruska Mucchione, Sabine Kleppek, and Roland Hohmann
The study on ‘climate-related risks and opportunities’ of the Swiss Federal Office for the Environment (FOEN) provides a comprehensive analysis of climate-related risks and opportunities for Switzerland until 2060. The synthesis of the study results has been the basis for the development of adaptation strategies and measures in Switzerland. The study also identifies knowledge gaps and missing planning tools for risks that are difficult to assess because they typically have a low probability of occurrence but potentially very high impacts on society and/or the environment. These refer in particular to risks that cumulate through process cascades or are triggered by meteorological/climatic extreme events recurring at shorter intervals than expected.
To respond to these gaps, a collaborative effort including academic and government institutions at different administrative levels is undertaken in order to explore and analyse the potential of such cumulative risks and actions needed to manage them in Switzerland. The project focuses on two case studies, which are developed in consultation with stakeholders from science, policy and practice at the national and sub-national level.
The case studies analyse risks triggered by meteorological events based on the recently published Swiss Climate Scenarios CH2018 projections, considering rare but plausible scenarios in which such triggering events cumulate and/or occur in combination.
We discuss international terminologies and experience with unexpected and cumulative extreme events and put them in relation to the Swiss context. Specifically, we present the cascading processes of the first case study, which focuses on the protective forests in the eastern Swiss Alps. Potential reduction of the protective capacity caused by extreme drought and heat and subsequent increase of risks caused by multiple natural hazards, such as fires and mass movements (snow avalanche, landslide), are assessed in this case study using semi-quantitative methods of risk analysis.
How to cite: Neukom, R., Salzmann, N., Huggel, C., Mucchione, V., Kleppek, S., and Hohmann, R.: Analysis and management of unexpected and cumulative climate risks in Switzerland, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13341, https://doi.org/10.5194/egusphere-egu2020-13341, 2020.
EGU2020-13529 | Displays | ITS3.2/NH10.7
An Analysis of the Characteristics of Adjacent Climatic Factors Affecting Extreme Snowfall on the Korean Peninsula
Young Hye Bae, Jaewon Jung, Imee V. Necesito, Soojun Kim, and Hung Soo Kim
Due to global warming, Arctic temperatures are rising, weakening the polar vortex and allowing cold Arctic air to move southward, causing heavy snow and record-breaking cold waves in the mid-latitude Northern Hemisphere. Record-breaking cold waves and extreme snowfall have paralyzed infrastructure and caused enormous human and property damage. In this study, we analyzed the characteristics of the winter climate patterns that affect extreme snowfall on the Korean Peninsula. Data from the European Centre for Medium-Range Weather Forecasts (ECMWF) provided by the Copernicus Climate Change Service (C3S) were used to analyze the characteristics of climate factors around the Korean Peninsula. All data cover November to March, the winter season in South Korea. Teleconnections were analyzed using climatic factors such as snowfall, humidity, air pressure, wind and sea surface temperature to characterize the climate factors affecting heavy snowfall on the Korean Peninsula. The results of the main study are expected to serve as basic data for predicting the occurrence and impact of heavy snow in areas surrounding the Korean Peninsula and for assessing vulnerability to snowfall damage.
How to cite: Bae, Y. H., Jung, J., Necesito, I. V., Kim, S., and Kim, H. S.: An Analysis of the Characteristics of Adjacent Climatic Factors Affecting Extreme Snowfall on the Korean Peninsula, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13529, https://doi.org/10.5194/egusphere-egu2020-13529, 2020.
EGU2020-2809 | Displays | ITS3.2/NH10.7
Introduction to the International Association for Disaster Risk ReductionTianhai Jiang
The devastating impact of natural hazards on social and economic development worldwide and on the security of human and natural systems is well known, as is their impact on the most vulnerable. The intensity and frequency of disasters are exacerbated by climate change, and concomitant losses are increasing due to rapid urbanization and poor planning decisions, among other factors.
The International Association for Disaster Risk Reduction (IADRR) is an international, non-governmental, professional body of individual membership, devoted to the development of global disaster risk reduction (DRR) endeavors with emphases on:
- Interdisciplinary, multi-sector, and multi-stakeholder cooperation among all categories of DRR professionals.
- Exchanging knowledge and ideas, research outputs and practical experiences, establishing a trans-disciplinary science and technology application interface.
- Promoting disciplinary development of disaster risk sciences and engineering, increasing risk awareness and education among the general public for resilient communities.
DRR is a global challenge: complicated, interlinked, and widely dispersed. It is only through multi-national, multi-sector, and interdisciplinary efforts that DRR challenges can be addressed most effectively. At present, there are many well-established organizations focusing on specific natural hazards (e.g., earthquakes, landslides) or certain aspects of disaster issues (e.g., emergency management), but no comprehensive international association for DRR professionals of different disciplines (e.g., social, natural, and healthcare sciences) and sectors (engineers, urban planners, emergency responders, the private sector) that can jointly tackle complex real-world DRR challenges and connect professionals with different expertise for collaborative implementation of the Sendai Framework, the SDGs, and the Paris Agreement.
It is through more effective collaboration among all related communities, through broader vision and social consensus-building, through influencing policy-making processes and governance at various scales of action, and through exploring application interfaces that the most effective new approaches to DRR can be sought.
Uniqueness and Synergies
- An overarching networking umbrella for individual professionals
- A sense of belonging and career development platform for all categories of professionals
- A science and application interface to promote research and public education and to bring science into action
- An expert and talent pool of different expertise for long-term communication and transboundary cooperation
- Synergize with existing mechanisms as a united front for real-world DRR challenges
Missions
- 1. Online & Offline networks
- 2. Filling the gaps between research, practice and policy
- 3. Expert and talent pool of DRR professionals for expertise and career development
- 4. Promote disaster risk science and risk awareness
- 5. Information sharing and application
- 6. Career development
The Association was discussed extensively among dozens of international, national, and regional DRR organizations, such as UNDRR, ISC, IRDR, ADPC, IPCC, UNESCAP, and WFEO, and will have its official debut at the opening of the Asia-Pacific Science & Technology Conference on Disaster Risk Reduction (APSTCDRR) in Malaysia on March 16. EGU will be one of our first big events after our launch. We would like to share our ideas for a broader umbrella for DRR professionals, for a fully mobilized and proactive international community of practice for disaster-resilient societies, and for ensuring the safe, resilient, and sustainable development of human society.
Co-Chairs of the Preparatory Committee for IADRR
Cui Peng Academician, Chinese Academy of Sciences (CAS); Institute of Mountain Hazards and Environment, CAS
Rajib Shaw Professor, Keio University, Japan; Chair, UNDRR Science Technology Advisory Group
How to cite: Jiang, T.: Introduction to the International Association for Disaster Risk Reduction, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2809, https://doi.org/10.5194/egusphere-egu2020-2809, 2020.
Ecosystems around the world are at risk of critical transitions due to increasing anthropogenic pressures and climate change. However, it is not clear where the risks are higher or where ecosystems are more vulnerable. When a dynamic system is close to a threshold, it leaves a statistical signature in its time series known as critical slowing down: the system takes longer to recover after a small disturbance, which translates into increases in variance, autocorrelation, and skewness, or into flickering. Here I measure critical slowing down in primary production proxies for marine and terrestrial ecosystems globally. Slowness is an indicator of potential instabilities and a proxy for resilience. While slowness is not a universal indicator of critical transitions, it can be used to detect potential regime shifts.
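The critical-slowing-down indicators named above (rising variance and lag-1 autocorrelation) are conventionally estimated with a rolling window; the sketch below is a generic illustration, not the study's code, and the AR(1) test series is synthetic:

```python
import numpy as np

def csd_indicators(ts, window):
    """Rolling-window variance and lag-1 autocorrelation,
    the classic critical-slowing-down indicators."""
    ts = np.asarray(ts, dtype=float)
    var, ac1 = [], []
    for i in range(len(ts) - window + 1):
        w = ts[i:i + window]
        var.append(w.var())
        ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])
    return np.array(var), np.array(ac1)

# Synthetic AR(1) series whose memory slowly increases (slowing down)
rng = np.random.default_rng(1)
n = 400
phi = np.linspace(0.2, 0.95, n)   # rising lag-1 coefficient
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi[t] * x[t - 1] + rng.normal()
var, ac1 = csd_indicators(x, window=100)
print(f"lag-1 AC, first vs last window: {ac1[0]:.2f} vs {ac1[-1]:.2f}")
```

An upward trend in both indicator series over time is the signature that would flag a potential approaching regime shift.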
How to cite: Rocha, J.: Detecting risk of regime shifts in ecosystems, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3956, https://doi.org/10.5194/egusphere-egu2020-3956, 2020.
EGU2020-5412 | Displays | ITS3.2/NH10.7
Interacting tipping elements increase risk of climate domino effectsNico Wunderling, Jonathan Donges, Jürgen Kurths, and Ricarda Winkelmann
The Greenland Ice Sheet, West Antarctic Ice Sheet, Atlantic Meridional Overturning Circulation (AMOC), El Niño-Southern Oscillation (ENSO) and the Amazon rainforest have been identified as potential tipping elements in the Earth system, exhibiting threshold behavior. While their individual tipping thresholds are fairly well understood, it is as yet unclear how their interactions might impact the overall stability of the Earth's climate system. Here, we explicitly study the effects of known physical interactions using a paradigmatic network approach, something that is not yet possible in a comprehensive way with more complex global circulation models or process-based models.
We analyze the risk of domino effects being triggered by each of the individual tipping elements under global warming in equilibrium experiments, propagating uncertainties in critical temperature thresholds and interaction strengths via a Monte-Carlo approach.
Overall, we find that the interactions tend to destabilize the network, with cascading failures occurring in 41% of cases in warming scenarios up to 2°C. More specifically, we uncover that:
(i) With increasing coupling strength, the temperature thresholds for inducing critical transitions are lowered significantly for West Antarctica, the AMOC, ENSO and the Amazon rainforest. The dampening feedback loop between the Greenland Ice Sheet and the AMOC, arising from increased freshwater flux on the one hand and relative cooling around Greenland on the other, leads to increased uncertainty as to whether the Greenland Ice Sheet tips or not.
(ii) Furthermore, our analysis reveals the role of each of the five tipping elements, showing that the polar ice sheets on Greenland and West Antarctica are often the initiators of tipping cascades (in up to 40% of ensemble members for Greenland), while the AMOC acts as a mediator, transmitting cascades.
This implies that the ice sheets, which are already at risk of transgressing their temperature thresholds within the Paris range of 1.5 to 2°C, are of particular importance for the stability of the climate system as a whole.
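The Monte Carlo cascade logic described above can be sketched as follows. All thresholds, uncertainties, and interaction strengths here are illustrative placeholders, not the study's calibrated values:

```python
import numpy as np

# Hypothetical tipping-element network (illustrative numbers only)
elements = ["GIS", "WAIS", "AMOC", "ENSO", "AMAZ"]
base_threshold = np.array([1.8, 2.0, 3.5, 4.0, 3.0])  # assumed critical warming (deg C)
# links[i, j]: how much a tipped element i lowers element j's threshold
links = np.array([
    [0.0, 0.3, 0.3, 0.0, 0.0],
    [0.3, 0.0, 0.2, 0.0, 0.0],
    [0.0, 0.2, 0.0, 0.2, 0.3],
    [0.0, 0.0, 0.0, 0.0, 0.2],
    [0.0, 0.0, 0.0, 0.0, 0.0],
])

def cascade(warming, thresholds, links):
    """Iteratively tip every element whose effective threshold,
    lowered by already-tipped neighbours, falls below the warming level."""
    tipped = np.zeros(len(thresholds), dtype=bool)
    while True:
        eff = thresholds - links[tipped].sum(axis=0)
        newly = (~tipped) & (warming >= eff)
        if not newly.any():
            return tipped
        tipped |= newly

def cascade_fraction(warming, n=2000, seed=0):
    """Fraction of ensemble members in which a cascade (>= 2 elements)
    occurs, propagating threshold uncertainty via Monte Carlo sampling."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n):
        th = base_threshold + rng.normal(0.0, 0.5, size=len(elements))
        if cascade(warming, th, links).sum() >= 2:
            hits += 1
    return hits / n

print(f"cascade fraction at 2 deg C warming: {cascade_fraction(2.0):.2f}")
```

Varying the warming level and the assumed coupling strengths then maps out how interactions destabilize the network relative to the uncoupled case.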
How to cite: Wunderling, N., Donges, J., Kurths, J., and Winkelmann, R.: Interacting tipping elements increase risk of climate domino effects, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5412, https://doi.org/10.5194/egusphere-egu2020-5412, 2020.
EGU2020-5920 | Displays | ITS3.2/NH10.7
Dynamics of atmospheric oxygen under anthropogenic stressesValerie Livina
We analyse proxy and observational data of atmospheric oxygen (ten contemporary records across the globe) and demonstrate its nonlinear decline, which is small but whose rate is uncertain. This decline was previously thought to be linear and caused mainly by fossil-fuel combustion, but by reviewing anthropogenic interventions we list more than a dozen smaller-scale processes that consume oxygen in various forms. We have identified and quantified a previously unaccounted-for sink of atmospheric oxygen that serves multiple industries. This sink grows nonlinearly and has already exceeded natural deoxygenation by weathering. This has also been confirmed by comparing the projection of oxygen decline with carbon emissions in the IPCC scenarios. We discuss the updated oxygen budget, possible solutions for mitigating the oxygen sink, and the future dynamics of atmospheric oxygen.
[1] Livina et al, Tipping point analysis of atmospheric oxygen concentration, Chaos 25, 036403 (2015).
[2] Livina & Vaz Martins, The future of atmospheric oxygen, Springer Nature, in press.
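Distinguishing a nonlinear from a linear decline can be illustrated by comparing polynomial fits to a record; the oxygen series below is synthetic and purely illustrative, not the data analysed in the study:

```python
import numpy as np

# Synthetic O2 record (per meg, years since 1990) with a small
# accelerating component on top of a linear decline
t = np.arange(30)
o2 = -4.0 * t - 0.05 * t ** 2

def rss(deg):
    """Residual sum of squares of a degree-`deg` polynomial fit."""
    coef = np.polyfit(t, o2, deg)
    return float(((np.polyval(coef, t) - o2) ** 2).sum())

# A quadratic fit explains an accelerating decline better than a line
print(rss(1) > rss(2))
```

On real records the comparison would use an information criterion (e.g., AIC) to penalize the extra parameter before concluding that the decline is nonlinear.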
How to cite: Livina, V.: Dynamics of atmospheric oxygen under anthropogenic stresses, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5920, https://doi.org/10.5194/egusphere-egu2020-5920, 2020.
EGU2020-6655 | Displays | ITS3.2/NH10.7
CompoundEvents: An R package for statistical modeling of compound climate and weather events and their impactsZengchao Hao, Fanghua Hao, and Xuan Zhang
Extremes such as droughts, floods, heatwaves, and cold waves may have large impacts on human society and the environment. The concurrent or consecutive occurrence of these extreme events (i.e., compound events) may result in even larger impacts than those caused by isolated extremes. Compound weather and climate extremes have attracted much attention in recent decades due to their disastrous impacts on the environment, ecosystems, and socioeconomic systems. It is thus of particular importance to improve our understanding of their properties, mechanisms, and impacts. Different methods for analyzing compound events and their impacts have been developed in recent decades. In this study, we introduce an R package for statistical modeling and analysis of compound events, using compound precipitation and temperature extremes as examples. The package has multiple components, covering the characterization, driver assessment, prediction, attribution, and impacts of compound events. For example, after extracting compound events based on thresholds for each variable, the package can be employed to assess the driving factors of compound events and to predict their occurrence. The impact of compound events on different sectors (e.g., crop yield, vegetation) can also be assessed with the multivariate model embedded in the package. The package is expected to be useful for compound event modeling and analysis for both researchers and decision-makers.
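The package itself is written in R and its API is not reproduced here; the core extraction step the abstract describes (flagging time steps where both variables cross their thresholds) can be sketched in Python, with hypothetical data and percentile choices:

```python
import numpy as np

def compound_events(precip, temp, p_thresh, t_thresh):
    """Flag time steps where a dry condition (precip below its threshold)
    and a hot condition (temp above its threshold) occur together."""
    precip = np.asarray(precip, dtype=float)
    temp = np.asarray(temp, dtype=float)
    return (precip < p_thresh) & (temp > t_thresh)

# Hypothetical daily data; thresholds at the 20th/80th percentiles
rng = np.random.default_rng(2)
precip = rng.gamma(2.0, 2.0, size=365)
temp = rng.normal(25.0, 5.0, size=365)
hot_dry = compound_events(precip, temp,
                          np.percentile(precip, 20),
                          np.percentile(temp, 80))
print(hot_dry.sum(), "hot-and-dry days flagged")
```

The resulting binary series is the input for the downstream steps the abstract lists, such as driver assessment or relating occurrence frequency to crop yield.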
How to cite: Hao, Z., Hao, F., and Zhang, X.: CompoundEvents: An R package for statistical modeling of compound climate and weather events and their impacts, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6655, https://doi.org/10.5194/egusphere-egu2020-6655, 2020.
EGU2020-8572 | Displays | ITS3.2/NH10.7
A typology of compound weather and climate eventsJakob Zscheischler, Olivia Martius, Seth Westra, Emanuele Bevacqua, Colin Raymond, Radley Horton, Bart van den Hurk, Amir AghaKouchak, Aglaé Jézéquel, Miguel Mahecha, Douglas Maraun, Alexandre Ramos, Nina Ridder, Wim Thiery, and Edoardo Vignotto
Weather- and climate-related extreme events such as droughts, heatwaves and storms arise from interactions between complex sets of physical processes across multiple spatial and temporal scales, often overwhelming the capacity of natural and/or human systems to cope. In many cases, the greatest impacts arise through the ‘compounding’ effect of weather and climate-related drivers and/or hazards, where the scale of the impacts can be much greater than if any of the drivers or hazards occur in isolation; for instance, when heavy precipitation falls on already saturated soil, causing a devastating flood. Compounding in this context refers to the amplification of an impact due to the occurrence of multiple drivers and/or hazards: either multiple hazards occur at the same time, previous climate conditions or weather events have increased a system’s vulnerability to a successive event, or spatially concurrent hazards lead to a regionally or globally integrated impact. More generally, compound weather and climate events refer to a combination of multiple climate drivers and/or hazards that contributes to societal or environmental risk.
Although many climate-related disasters are caused by compound events, our ability to understand, analyse and project these events and interactions between their drivers is still in its infancy. Here we review the current state of knowledge on compound events and propose a typology to synthesize the available literature and guide future research. We organize the highly diverse event types broadly along four main themes, namely preconditioned, multivariate, temporally compounding, and spatially compounding events. We highlight promising analytical approaches tailored to the different event types, which will aid future research and pave the way to a coherent framework for compound event analysis. We further illustrate how human-induced climate change affects different aspects of compound events, such as their frequency and intensity through variations in the mean, variability, and the dependence between their climatic drivers. Finally, we discuss the emergence of new types of events that may become highly relevant in a warmer climate.
How to cite: Zscheischler, J., Martius, O., Westra, S., Bevacqua, E., Raymond, C., Horton, R., van den Hurk, B., AghaKouchak, A., Jézéquel, A., Mahecha, M., Maraun, D., Ramos, A., Ridder, N., Thiery, W., and Vignotto, E.: A typology of compound weather and climate events, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8572, https://doi.org/10.5194/egusphere-egu2020-8572, 2020.
EGU2020-8745 | Displays | ITS3.2/NH10.7
Direct and indirect impacts of climate change on wheat yield in the Indo-Gangetic plain in IndiaAnne Sophie Daloz, Johanne Rydsaa, Øivind Hodnebrog, Jana Sillmann, Bob van Oort, Christian Mohr, Madhoolika Agrawal, Lisa Emberson, Frøde Stordal, Tianyi Zhang, and Nathalie Schaller
The Sustainable Development Goals (SDGs) were adopted by all United Nations member states in 2015. “Zero hunger” and “Good health and well-being” are among these goals, have major implications for agriculture, and raise the question of how agriculture will be impacted by climate change. This work focuses on the potential impacts of the changing climate on agriculture, using the example of wheat yield in the Indo-Gangetic Plain (IGP) in India. First, the potential future changes in temperature and precipitation over the IGP are examined in regional climate simulations. The results show an increase in mean temperature and precipitation as well as in maximum temperature during the growing season, the Rabi season (November-April). Then, the direct (via temperature and precipitation) and indirect (via limiting irrigation) impacts of climate change on wheat yield are derived with a crop model for four selected sites in different states of the IGP (Punjab, Haryana, Uttar Pradesh and Bihar). The chosen sites are spread across the region to represent its major wheat growing areas.
The direct impact of climate change leads to wheat yield losses between -1% and -8%, depending on the site examined and the irrigation regime chosen (6, 5, 3 or 1 irrigations). In this experiment, the number of irrigations remains the same in the present and future climate. When the indirect impact of climate change is included, the losses become much higher, reaching -4% to -36% depending on the site examined and on how much irrigation is limited. This work shows the sensitivity of wheat yield to direct and indirect impacts of climate change in the IGP. It also emphasizes the complexity of climatic risk and the necessity of integrating more indirect impacts of climate change to fully assess how it affects agriculture.
How to cite: Daloz, A. S., Rydsaa, J., Hodnebrog, Ø., Sillmann, J., van Oort, B., Mohr, C., Agrawal, M., Emberson, L., Stordal, F., Zhang, T., and Schaller, N.: Direct and indirect impacts of climate change on wheat yield in the Indo-Gangetic plain in India, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8745, https://doi.org/10.5194/egusphere-egu2020-8745, 2020.
EGU2020-8910 | Displays | ITS3.2/NH10.7
The relevance of climate information in the assessment of flood regulating ecosystem servicesThea Wübbelmann, Steffen Bender, and Benjamin Burkhard
Extreme weather events, failure of climate-change mitigation and adaptation, and biodiversity loss and ecosystem collapse are among the main global risks. Climate change is one of the major drivers of ecosystem and biodiversity loss as well as of the higher frequency and intensity of natural disasters and extreme weather events. Consequently, ecosystem health and the provision of ecosystem services (ES) are affected by these increasing pressures.
However, the provision of ecosystem services must be ensured in order to guarantee and maintain human well-being. The ES concept defines the benefits that people obtain from ecosystems; it links social and environmental systems in order to achieve sustainable use and to uncover trade-offs between different ES.
Given the increasing number of pluvial and fluvial flood events and of affected people in recent years, one key ES under external pressure is flood regulation, which describes the capacity to reduce flood hazards. Amongst other factors, climate change has a strong influence on flood characteristics. Currently, most studies analyse the present status of flood regulating ES; changing climate conditions and the associated functionality of flood regulating ES are mostly not taken into account. This study shows the importance of assessing both the current and the future functionality of flood regulating ES. In order to adapt ecosystems and their functionalities to projected climate impacts, it is important to consider regional climate information when estimating flood regulating ES.
How to cite: Wübbelmann, T., Bender, S., and Burkhard, B.: The relevance of climate information in the assessment of flood regulating ecosystem services, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8910, https://doi.org/10.5194/egusphere-egu2020-8910, 2020.
EGU2020-8957 | Displays | ITS3.2/NH10.7
Dynamics of water quality and phytoplankton community driving by extreme inflow in a huge drinking water source reservoir, ChinaGuangwei Zhu, Wenyi Da, Mengyuan Zhu, and Wei Li
The tail of a reservoir is an unstable zone in terms of water quality and phytoplankton community, and is therefore a crucial zone of aquatic ecosystem transition. To understand the transition characteristics and driving mechanisms of water environment dynamics, eighteen months of high-frequency monitoring of the water environment and phytoplankton community in the tail of a deep and large reservoir, Xin'anjiang Reservoir in southeast China, were undertaken with a water quality monitoring buoy and water sampling at 3-day intervals. The results showed clear seasonal thermal and oxygen stratification in the river mouth of the reservoir. Nutrient and chlorophyll-a concentrations also stratified during the thermal stratification period, and heavy rain and inflow events destroyed the stratification within a short time. Nutrient concentrations were highly dynamic in the river mouth: total phosphorus ranged from 0.011 mg·L⁻¹ to 0.188 mg·L⁻¹, and total nitrogen from 0.75 mg·L⁻¹ to 2.76 mg·L⁻¹. Dissolved phosphorus accounted for 56% of total phosphorus and dissolved nitrogen for 88% of total nitrogen. Nutrient concentrations were strongly influenced by rainfall intensity and inflow rate; total phosphorus and nitrogen concentrations were significantly related to 3-day accumulated rainfall, and nutrient concentrations in the flood season (March to June) were significantly higher than in the non-flood season (P
How to cite: Zhu, G., Da, W., Zhu, M., and Li, W.: Dynamics of water quality and phytoplankton community driving by extreme inflow in a huge drinking water source reservoir, China, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8957, https://doi.org/10.5194/egusphere-egu2020-8957, 2020.
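The 3-day accumulated rainfall used as a predictor above is a simple backward-looking rolling sum. A minimal sketch of that accumulation (not the authors' code; the rainfall series and window are illustrative):

```python
def accumulated_rainfall(daily_mm, window=3):
    """Sum of rainfall over the current day and the
    (window - 1) preceding days, in mm."""
    return [
        sum(daily_mm[max(0, i - window + 1): i + 1])
        for i in range(len(daily_mm))
    ]

# Hypothetical daily rainfall series (mm)
rain = [0.0, 10.0, 5.0, 0.0, 20.0]
acc3 = accumulated_rainfall(rain)
# acc3 -> [0.0, 10.0, 15.0, 15.0, 25.0]
```

Each accumulated value could then be paired with the nutrient concentration sampled on the same day to test the reported relationship.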
EGU2020-9501 | Displays | ITS3.2/NH10.7
Strengthening climate-resilience in the Mekong Delta – an application of the Economics of Climate Adaption (ECA) methodologyArun Rana, David N. Bresch, Annette Detken, and Maxime Souvignet
Climate change presents an ongoing and imminent threat to regions, communities and infrastructure worldwide and in turn puts increasing pressure on national and local governments to take action. In the current study we identify and evaluate the climate impacts faced by the city of Can Tho in Vietnam and the broader Mekong Delta, and we appraise preparedness options to manage today’s as well as future climate risk. We first identify the climate risks in cooperation with local, national and international stakeholders in the region. This is done for current and future time scales under the Shared Socio-Economic Pathways (SSPs) suggested in the current iteration of the IPCC evaluation process. Based on these development pathways, we apply the Economics of Climate Adaptation (ECA) methodology to quantify the climate risks that various sectors of the economy will face until 2050, with a focus on flooding. Further, we assess a range of possible adaptation measures - behavioral, environmental, physical and financial - that can mitigate the identified risks, providing a cost-benefit analysis for each adaptation measure as well as for bundles thereof. The ECA methodology is an established tool for enhancing our knowledge on the topic, and its application in this specific context will enable stakeholders to strengthen societal resilience in the context of both socio-economic development and climate change.
How to cite: Rana, A., Bresch, D. N., Detken, A., and Souvignet, M.: Strengthening climate-resilience in the Mekong Delta – an application of the Economics of Climate Adaption (ECA) methodology, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9501, https://doi.org/10.5194/egusphere-egu2020-9501, 2020.
EGU2020-9517 | Displays | ITS3.2/NH10.7
Future compound climate extremes and exposed population in AfricaTorsten Weber, Paul Bowyer, Diana Rechid, Susanne Pfeifer, Francesca Raffaele, Armelle Reca Remedio, Claas Teichmann, and Daniela Jacob
The African population is already exposed to climate extremes such as droughts, heat waves and extreme precipitation, which cause damage to agriculture and infrastructure and affect people's well-being. However, the simultaneous or sequential occurrence of two single climate extremes (a compound event) has a more severe impact on the population and economy than single climate extremes. This circumstance is exacerbated by the growth of the African population, which is expected to double by the middle of this century according to the UN Department of Economic and Social Affairs (DESA). Currently, little is known about the potential future change in the occurrence of compound climate extremes and the population exposed to these events in Africa. This knowledge is, however, needed by stakeholders and decision makers to develop measures for adaptation.
This research analyzes the occurrence of compound climate extremes such as droughts, heat waves and extreme precipitation in Africa under two different emission scenarios for the end of the century. For the analysis, we applied regional climate projections from the newly performed Coordinated Output for Regional Evaluations (CORE) embedded in the WCRP Coordinated Regional Climate Downscaling Experiment (CORDEX) Framework for Africa at a grid spacing of 25 km, and spatial maps of population projections derived from two different Shared Socioeconomic Pathways (SSPs). In order to take into account a low and a high emission scenario, the Representative Concentration Pathways (RCPs) 2.6 and 8.5 were used in the regional climate projections.
We will show that compound climate extremes are projected to be more frequent in Africa under the high emission scenario at the end of the century, and an increase in total exposure is primarily expected for West Africa, Central-East Africa and South-East Africa. Furthermore, combined impacts of population growth and increase in frequencies of compound extremes play an important role in the change of total exposure.
How to cite: Weber, T., Bowyer, P., Rechid, D., Pfeifer, S., Raffaele, F., Remedio, A. R., Teichmann, C., and Jacob, D.: Future compound climate extremes and exposed population in Africa, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9517, https://doi.org/10.5194/egusphere-egu2020-9517, 2020.
EGU2020-11387 | Displays | ITS3.2/NH10.7
Single vs. concurrent extreme events: Economic resonance of weather extremes increases impact on societal welfare lossKilian Kuhla, Sven Willner, Christian Otto, Tobias Geiger, and Anders Levermann
Weather extremes such as heat waves, tropical cyclones and river floods are likely to intensify with increasing global mean temperature. In a globally connected supply and trade network such extreme weather events cause economic shocks that may interfere with each other potentially amplifying their overall economic impact.
Here we analyze the economic resonance of concurrent extreme events, that is, the spatial and temporal overlap of the economic response dynamics of more than one extreme event category. In our analysis we focus on the event categories heat stress, river floods and tropical cyclones. We simulate the regional (direct) and global (indirect, via supply chains) economic losses and gains for each extreme event category individually as well as for their concurrent occurrence over the next two decades, comparing the sum of the three single simulations to the outcome of the concurrent simulation. We show that the global welfare loss due to concurrent weather extremes is more than 17% larger, due to market effects, than the sum of the losses of each single event category. Overall, this economic resonance yields a non-linearly enhanced price effect, which leads to a stronger economic impact as well as a highly heterogeneous distribution of the amplification of regional welfare losses among countries.
Our analysis is based on the climate models of the CMIP5 ensemble which have been bias-corrected within the ISIMIP2b project towards an observation-based data set using a trend-preserving method. From these we use RCP2.6 and 6.0 for future climate projections. We transfer the three extreme weather event categories to a daily, regional and sectoral production failure. Our agent-based dynamic economic loss-propagation model Acclimate then uses these local production failures to compute the immediate response dynamics within the global supply chain as well as the subsequent trade adjustments. The Acclimate model thereby depicts a highly interconnected network of firms and consumers, which maximize their profits by choosing the optimal production level and corresponding upstream demand as well as the optimal distribution of this demand among its suppliers; transport and storage inventories act as buffers for supply shocks. The model accounts for local price changes, and supply and demand mismatches are resolved explicitly over time.
Our results suggest that the economic impacts of weather extremes are larger than can be derived from conventional single-event analysis. Consequently, the societal costs of climate change are likely to be underestimated in studies focusing on single extreme categories.
How to cite: Kuhla, K., Willner, S., Otto, C., Geiger, T., and Levermann, A.: Single vs. concurrent extreme events: Economic resonance of weather extremes increases impact on societal welfare loss, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11387, https://doi.org/10.5194/egusphere-egu2020-11387, 2020.
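The headline figure above compares the loss from one concurrent simulation against the sum of three single-category simulations. That amplification metric is a one-line ratio; a sketch with hypothetical loss values (illustrative numbers, not Acclimate output):

```python
def resonance_amplification(concurrent_loss, single_losses):
    """Relative increase of the concurrent welfare loss over the
    linear sum of single-category losses (0.17 means +17%)."""
    linear_sum = sum(single_losses)
    return concurrent_loss / linear_sum - 1.0

# Hypothetical welfare losses (arbitrary units) for heat stress,
# river floods and tropical cyclones, plus one concurrent run:
amp = resonance_amplification(3.51, [1.0, 1.2, 0.8])
# amp is approximately 0.17, i.e. a 17% amplification
```

A positive value indicates that interacting market responses amplify the combined shock beyond simple addition; zero would mean the events propagate independently.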
EGU2020-11543 | Displays | ITS3.2/NH10.7
Drought monitoring Using Standardized Precipitation Index (SPI), Standardized Precipitation-Evapotranspiration Index (SPEI) and Normalized-Difference Snow Index (NDSI) with observational and ERA5 dataset, within the uremia lake basin, IranMaral Habibi, Wolfgang Schöner, and Iman Babaeian
Abstract
In this study, droughts were assessed for the Uremia Lake Basin in north-west Iran, which has been facing the risk of drying over the last decades. Since long-term and spatially dense observational data are not available, in particular for the mountainous part of the basin, we successfully tested the performance of the ERA5 reanalysis data set for this purpose. Comparing time series of the drought indices SPI and SPEI, both indices captured the temporal variation of droughts. SPEI identified more drought events, whereas SPI, which uses precipitation as its only input, fails to capture the increasing number of evaporation-driven droughts in the basin, observed in particular during the most recent decade. SPEI was calculated from monthly temperature and precipitation; the most extreme dry conditions of the basin were found in the mountainous areas, and based on SPEI the highest actual evapotranspiration occurs near the lake and in the high mountains. Since drought has become more extreme at higher elevations in recent years, we also examined snow cover, which plays a significant role in surface runoff and groundwater recharge in mountainous and semi-arid areas such as the Uremia lake basin. Climate change affects snow distribution, snow cover and runoff at different scales, so spatial and temporal monitoring of the snow-covered surface and its changes is necessary. Changes in snow cover area (SCA) in the study region were therefore analysed using MODIS images via the NDSI index, together with snow cover data from the ERA5 dataset. We conclude that the temperature rise of recent decades has led to high evaporation and a decrease in snow-covered area, which could affect the region’s water reservoir in the future.
Keywords: drought monitoring, ERA5, MODIS, SPI, SPEI, NDSI
How to cite: Habibi, M., Schöner, W., and Babaeian, I.: Drought monitoring Using Standardized Precipitation Index (SPI), Standardized Precipitation-Evapotranspiration Index (SPEI) and Normalized-Difference Snow Index (NDSI) with observational and ERA5 dataset, within the uremia lake basin, Iran, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11543, https://doi.org/10.5194/egusphere-egu2020-11543, 2020.
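The NDSI referred to above is a standard band-ratio index; for MODIS it is computed from the green (band 4) and shortwave-infrared (band 6) reflectances. A minimal sketch (the reflectance values are illustrative, not from the study):

```python
def ndsi(green, swir):
    """Normalized-Difference Snow Index from green and
    shortwave-infrared surface reflectances."""
    return (green - swir) / (green + swir)

# Snow is bright in the green band and dark in the SWIR band,
# so snow-covered pixels yield high NDSI values; a threshold
# around 0.4 is commonly used to map snow cover.
value = ndsi(0.7, 0.1)  # approximately 0.75 -> snow
```

Applying this pixel-wise to MODIS reflectance scenes and thresholding gives the snow-covered area whose temporal change the abstract analyses.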
EGU2020-12141 | Displays | ITS3.2/NH10.7
Cropland carbon uptake delayed by 2019 U.S. Midwest floodsYi Yin, Branden Byrne, Junjie Liu, Paul Wennberg, Philipp Köhler, Vincent Humphrey, Troy Magney, Kenneth Davis, Tobias Gerken, Sha Feng, Joshua Digangi, and Christian Frankenberg
While large-scale floods directly impact human lives and infrastructure, they also profoundly affect agricultural productivity. New satellite observations of vegetation activity and atmospheric CO2 offer the opportunity to quantify the effects of such extreme events on cropland carbon sequestration, which is important for mitigation strategies. Widespread flooding during spring and early summer 2019 delayed crop planting across the U.S. Midwest. As a result, satellite observations of solar-induced chlorophyll fluorescence (SIF) from the TROPOspheric Monitoring Instrument (TROPOMI) and the Orbiting Carbon Observatory (OCO-2) reveal a 16-day shift in the seasonal cycle of photosynthetic activity relative to 2018, along with a 15% lower peak photosynthesis. We estimate that the 2019 anomaly reduced gross primary production (GPP) by 0.21 PgC in June and July, partially compensated in August and September (+0.14 PgC). The extension of the 2019 growing season into late September likely benefited from increased water availability and favourable late-season temperatures. Ultimately, this change is predicted to reduce the crop yield over most of the Midwest Corn/Soy belt by ~15%. Using an atmospheric transport model, we show that a decline of ~0.1 PgC in net carbon uptake during June and July is consistent with CO2 enhancements observed by Atmospheric Carbon and Transport - America (ACT-America) aircraft and OCO-2. This study quantifies the impact of floods on cropland productivity and demonstrates the potential of combining SIF with atmospheric CO2 observations to monitor regional carbon flux anomalies.
How to cite: Yin, Y., Byrne, B., Liu, J., Wennberg, P., Köhler, P., Humphrey, V., Magney, T., Davis, K., Gerken, T., Feng, S., Digangi, J., and Frankenberg, C.: Cropland carbon uptake delayed by 2019 U.S. Midwest floods, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12141, https://doi.org/10.5194/egusphere-egu2020-12141, 2020.
EGU2020-12451 | Displays | ITS3.2/NH10.7
Land management extends the duration of the impact of extreme on vegetationTiexi Chen
Extreme weather events have a severe impact on vegetation and the carbon cycle. It is generally believed that vegetation begins to recover immediately after an extreme event, whether slowly or rapidly. This study reports a new response mechanism. We investigated an extreme precipitation event that occurred in a region dominated by double-cropping (DC) systems in the Yangtze-Huai plain in China, where winter crops and summer crops are planted rotationally within one year. Generally, October and June are the transitional periods for harvesting and sowing. In October 2016, monthly precipitation showed strong positive anomalies. Strong negative anomalies of EVI (enhanced vegetation index) persisted from March to May 2017 in response to farmland abandonment caused by the heavy rain, especially over farmland with winter crop - summer rice paddy systems. Abandonment due to precipitation was also confirmed in local agro-meteorological monthly reports and some local government announcements. Data from a flux observation station in the region showed that NEE dropped significantly from January to May 2017 compared to the same period in 2016. Our results demonstrate that, in such a double-cropping system, once an extreme event occurs during the key sowing period and phenological conditions prevent replanting afterwards, the impact lasts through the entire crop growth period until the next sowing. In other words, land management can extend the duration of the impact of extremes on vegetation.
How to cite: Chen, T.: Land management extends the duration of the impact of extreme on vegetation, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12451, https://doi.org/10.5194/egusphere-egu2020-12451, 2020.
EGU2020-13375 | Displays | ITS3.2/NH10.7
Global-scale characterization of turning points in arid and semi-arid ecosystems functioning
Paulo Bernardino, Wanda De Keersmaecker, Rasmus Fensholt, Jan Verbesselt, Ben Somers, and Stephanie Horion
Ecosystems in drylands are highly susceptible to changes in their functioning due to extreme and prolonged droughts or anthropogenic perturbation. Long-standing pressure, from climate or human action, may result in severe alterations in their dynamics. Moreover, changes in dryland ecosystem functioning can take place abruptly (Horion et al., 2016). Such abrupt changes may have severe ecological and economic consequences, disturbing the livelihoods of dryland inhabitants and causing increased poverty and food insecurity. Considering that drylands cover 40% of Earth’s land surface and are home to around one-third of the human population, detecting and characterizing hotspots of abrupt changes in ecosystem functioning (here called turning points) becomes even more crucial.
BFAST, a time series segmentation technique, was used to detect breakpoints in time series (1982-2015) of rain-use efficiency. An abrupt change in a rain-use efficiency time series points towards a significant change in the way an ecosystem responds to precipitation, allowing the study of turning points in ecosystem functioning in both natural and anthropogenic landscapes. Moreover, we proposed a new typology to characterize turning points in ecosystem functioning, which takes into account the trend in ecosystem functioning before and after the turning point, as well as differences in the rate of change. Case studies were used to evaluate the performance of the new typology. Finally, ancillary data on population density and drought were used to gain first insights into the potential determinants of hotspots of turning point occurrence.
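The breakpoint idea behind this analysis can be illustrated with a toy example. The study itself uses BFAST (an R package); the sketch below is not BFAST but a deliberately simplified stand-in that finds the single best mean-shift breakpoint in a rain-use efficiency series by minimising the pooled sum of squared errors.

```python
# Toy single-breakpoint detector for a rain-use efficiency (RUE)
# series. NOT the BFAST algorithm -- just an illustration of how a
# segmentation method flags an abrupt change in ecosystem functioning.

def best_breakpoint(series, min_seg=3):
    """Return (index, sse): the split minimising the total SSE of the
    two segment means, with each segment at least min_seg long."""
    def sse(seg):
        m = sum(seg) / len(seg)
        return sum((x - m) ** 2 for x in seg)
    best = None
    for i in range(min_seg, len(series) - min_seg + 1):
        total = sse(series[:i]) + sse(series[i:])
        if best is None or total < best[1]:
            best = (i, total)
    return best

# Synthetic RUE series with an abrupt drop starting at index 6
rue = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 0.4, 0.5, 0.45, 0.5, 0.42, 0.48]
idx, _ = best_breakpoint(rue)
print(idx)  # -> 6, the onset of the regime with lower RUE
```

A real analysis would additionally test the break for statistical significance and allow multiple breaks per series, as BFAST does.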
Our results showed that 13.6% of global drylands presented a turning point in ecosystem functioning between 1982 and 2015. Hotspots of turning point occurrence were observed in North America (where 62.6% of the turning points were characterized by a decreasing trend in ecosystem functioning), the Sahel, Central Asia, and Australia. The last three hotspot regions were mainly characterized by a positive trend in ecosystem functioning after the turning point. The ancillary data pointed to an influence of both droughts and human action on turning point occurrence in North America, while in Asia and Australia turning point occurrence was higher in areas with higher anthropogenic pressure. In the grasslands of the Sahel, turning points were potentially related to drought.
By detecting where and when hotspots of turning points occurred in recent decades, and by characterizing the trends in ecosystem functioning before and after the turning points, we advanced towards better supporting decision making related to ecosystems conservation and management in drylands. Moreover, we provided first insights about the drivers of ecosystem functioning change in hotspots of turning point occurrence in global drylands (Bernardino et al., 2019).
References:
Bernardino PN, De Keersmaecker W, Fensholt R, Verbesselt J, Somers B, Horion S (2019) Global-scale characterization of turning points in arid and semi-arid ecosystems functioning. Manuscript submitted for publication.
Horion S, Prishchepov A V., Verbesselt J, de Beurs K, Tagesson T, Fensholt R (2016) Revealing turning points in ecosystem functioning over the Northern Eurasian agricultural frontier. Global change biology, 22, 2801–2817.
How to cite: Bernardino, P., De Keersmaecker, W., Fensholt, R., Verbesselt, J., Somers, B., and Horion, S.: Global-scale characterization of turning points in arid and semi-arid ecosystems functioning, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13375, https://doi.org/10.5194/egusphere-egu2020-13375, 2020.
EGU2020-18140 | Displays | ITS3.2/NH10.7
Assessment of drought and heat coupling during summer using copulas
Ana Russo, Andreia Ribeiro, Célia M. Gouveia, and Carlos Pires
Droughts and hot extremes constitute key sources of risk to several socio-economic activities and human lives throughout the world, and their impacts can be exacerbated by their co-occurrence. Moreover, their occurrence is expected to increase under future global warming. Therefore, understanding the drought-heatwave feedback mechanisms is crucial for estimating the risk of impacts associated with their compound occurrence.
Several studies have examined individual extreme events or analyzed how intense certain events were. Nevertheless, limited research has explored drought-heatwave dependence, mostly focusing on the contribution of low antecedent soil moisture or preceding precipitation deficits to summer hot extremes. Despite recent efforts to assess the interactions between hot and dry extremes, developing models that describe the joint behaviour of climate extremes remains a challenge.
Here we assess the probability of extremely hot summer days in the Iberian Peninsula (IP) being preceded by drought events in spring and early summer, based on their joint probability distribution modelled through copula theory. Drought events were characterized by the Standardized Precipitation Evapotranspiration Index (SPEI) for May, June and July at different timescales (3, 6 and 9 months). The Number of Hot Days per month (NHD) summed over July and August was used to characterize hot extremes.
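The NHD index can be sketched in a few lines. The abstract does not spell out the hot-day threshold, so this illustration assumes a hot day is one whose daily maximum temperature exceeds the 90th percentile of a reference climatology; both the threshold choice and the sample data are assumptions for illustration only.

```python
# Toy Number of Hot Days (NHD) computation. The hot-day threshold
# (90th climatological percentile) is an assumed definition, not
# necessarily the one used in the study.

def percentile(values, q):
    """Nearest-rank percentile (q in [0, 100])."""
    s = sorted(values)
    k = max(0, min(len(s) - 1, round(q / 100 * len(s)) - 1))
    return s[k]

def nhd(daily_tmax, climatology, q=90):
    """Count days whose maximum temperature exceeds the q-th
    percentile of the reference climatology."""
    thresh = percentile(climatology, q)
    return sum(1 for t in daily_tmax if t > thresh)

climatology = list(range(20, 40))          # reference sample, 20..39 degC
july_august = [25, 38, 39, 41, 30, 39.5]   # observed daily maxima
print(nhd(july_august, climatology))  # -> 4 days above the 90th pct (37)
```

The study then couples this count with SPEI through a fitted copula; the copula fitting itself requires a statistics library and is beyond this sketch.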
Asymmetrical copulas with upper tail dependence were identified for the majority of the IP’s regions (except in northwestern regions), suggesting that compound hot and dry extremes are strongly associated. Moreover, the transition from previous wet to dry regimes increases substantially the probability of exceeding summer NHD extreme values. These results are region and time-scale dependent: 1) northeastern, western and central regions were found to be the regions more prone to summer hot extremes induced by dryness; 2) southwestern, northwestern and southeastern regions are less prone.
This assessment could be an important tool for responsible authorities to mitigate the impacts magnified by the interactions between the different hazards.
Acknowledgements: This work was supported by project IMPECAF (PTDC/CTA-CLI/28902/2017). Andreia Ribeiro thanks FCT for the grant PD/BD/114481/2016.
How to cite: Russo, A., Ribeiro, A., Gouveia, C. M., and Pires, C.: Assessment of drought and heat coupling during summer using copulas, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18140, https://doi.org/10.5194/egusphere-egu2020-18140, 2020.
EGU2020-19561 | Displays | ITS3.2/NH10.7
Social tipping as a response to anticipated sea level rise
Marc Wiedermann, E Keith Smith, Jonathan F Donges, Jobst Heitzig, and Ricarda Winkelmann
Social tipping, where minorities trigger large populations to engage in collective action, has been suggested as a key component to address contemporary global challenges, such as climate change or biodiversity loss. At the same time, certain climate tipping elements, such as the West Antarctic Ice Sheet, are already at risk of transgressing their critical thresholds, even within the aspired goals of the Paris Agreement to limit global temperature rise to 1.5°C to 2°C. Consequently, recent studies suggest that rapid societal transformations, i.e., wanted tipping, may be required to prevent the crossing of dangerous tipping points or critical thresholds in the climate system.
Here, we explore likelihoods for such social tipping in climate action as a response to anticipated climate impacts, particularly sea-level rise. We first propose a low-dimensional model for social tipping as a refined version of Granovetter's famous and well-established threshold model. This model assumes individuals to become active, e.g., to mitigate climate change, through social influence if a sufficient number of instigators in one’s social network initiate a considered action. We estimate the number of instigators as shares of per-country populations that will likely be impacted by sea-level rise within a given time-window of anticipation. Specifically, we consider sea-level contributions from thermal expansion, mountain glaciers, Greenland as well as Antarctica under different concentration pathways. Additionally, we use nationally aggregated social science survey data of climate change attitudes to estimate the proportion of the population that has the potential to be mobilized for climate action, thereby accounting for heterogeneities across countries as well.
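The Granovetter-style dynamics the model refines can be sketched as a simple fixed-point iteration: each individual becomes active once the active share of the population meets their personal threshold. The thresholds and instigator share below are illustrative numbers, not the study's calibrated per-country values.

```python
# Minimal Granovetter threshold cascade: iterate the active share
# until no further individuals' thresholds are met. Illustrative only.

def cascade_size(thresholds, instigator_share):
    """Final active share, starting from a seed of instigators."""
    n = len(thresholds)
    active = instigator_share
    while True:
        new_active = max(active,
                         sum(1 for t in thresholds if t <= active) / n)
        if new_active == active:
            return active
        active = new_active

# Uniformly spread thresholds (0.00, 0.01, ..., 0.99): each newly
# activated cohort meets the next cohort's threshold, so even a tiny
# instigator minority tips the entire population.
ths = [i / 100 for i in range(100)]
print(cascade_size(ths, 0.01))  # -> 1.0 (full cascade)
```

Granovetter's classic point, echoed in the abstract, is that the outcome is extremely sensitive to the threshold distribution: with gaps in the distribution the same seed can stall far below a full cascade.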
Our model shows that social tipping, i.e., the majority of a population acting against climate change, becomes likely if the individuals' anticipation time horizon of climate impacts lies in the order of a century. This observation aligns well with ethical time horizons that are often assumed in the context of climate tipping points as they represent the expected lifetime of our children and grandchildren. We thus show that, even though sea-level rise is generally a very slow process, a small dedicated minority of anticipatory individuals – usually 10–20 percent of the population – has the potential to tip collective climate action and with it a whole ensemble of attitudes, behaviours and ultimately policies.
How to cite: Wiedermann, M., Smith, E. K., Donges, J. F., Heitzig, J., and Winkelmann, R.: Social tipping as a response to anticipated sea level rise, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19561, https://doi.org/10.5194/egusphere-egu2020-19561, 2020.
EGU2020-19967 | Displays | ITS3.2/NH10.7
Emerging heat extremes, adaptation and the speed of socio-economic development
Carl-Friedrich Schleussner, Martha M. Vogel, Peter Pfleiderer, Marina Andrijevic, Friederike E. Otto, and Sonia I. Seneviratne
Heat extremes are among the most pertinent extreme weather hazards. At the same time, adaptation to the impacts of extreme heat can be very effective. The ability of societies to adapt effectively to climate change hazards such as extreme heat, however, critically depends on their level of socio-economic development. Examining the risks posed by future heat extremes to human societies requires linking socio-economic development trajectories with emerging heat extremes. Such an integrated assessment can also provide insights into whether or not it is indeed plausible for societies to “outgrow” climate change by increasing adaptive capacity faster than climate impacts emerge - a narrative that still underlies many policy decisions prioritizing economic development over climate action.
Here we provide such an integrated assessment by combining a novel approach to project the continuous emergence of heat extremes over the 21st century under different concentration pathways and the pace of socio-economic development under the shared socio-economic pathways accounting for continuous autonomous adaptation. We find that even under the most optimistic scenarios of future development, countries may not be able to outpace unmitigated climate change. Only Paris-Agreement compatible concentration pathways allow for human development to keep up with or even outpace the emerging climate change signal in vulnerable countries in the near future. A similar picture emerges when comparing heat day emergence with future evolution of governance as a proxy for adaptive capacity. Our findings underscore the critical importance of achieving the Paris Agreement goals to enable climate-resilient, sustainable development.
How to cite: Schleussner, C.-F., Vogel, M. M., Pfleiderer, P., Andrijevic, M., Otto, F. E., and Seneviratne, S. I.: Emerging heat extremes, adaptation and the speed of socio-economic development, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19967, https://doi.org/10.5194/egusphere-egu2020-19967, 2020.
EGU2020-20707 | Displays | ITS3.2/NH10.7
Multi-model subseasonal forecasts of spring cold spells: potential value for the hazelnut agribusiness
Paolo Ruggieri, Stefano Materia, Angel G. Muñoz, M. Carmen Alvarez Castro, Simon J. Mason, Frederic Vitart, and Silvio Gualdi
Producing probabilistic subseasonal forecasts of extreme events up to six weeks in advance is crucial for many economic sectors. In agribusiness, this time-scale is particularly critical because it allows for mitigation strategies to be adopted for counteracting weather hazards and taking advantage of opportunities.
For example, spring frosts are detrimental to many nut trees, resulting in dramatic losses at harvest time. To explore subseasonal forecast quality in boreal spring, identified as one of the most sensitive times of the year by agribusiness end-users, we build a multi-system ensemble using four models involved in the Subseasonal-to-Seasonal (S2S) Prediction Project. Two-meter temperature forecasts are used to analyze cold spell predictions in the coastal Black Sea region, an area that is a global leader in the production of hazelnuts. When analyzed at the global scale, the multi-system ensemble probabilistic forecasts for near-surface temperature are better than climatological values for several regions, especially the Tropics, even many weeks in advance; however, in the coastal Black Sea region skill is low after the second forecast week. When cold spells are predicted instead of near-surface temperatures, skill improves for the region, and the forecasts prove to contain potentially useful information for stakeholders willing to put mitigation plans into effect. Using a cost-loss model approach for the first time in this context, we show that there is added value in having such a forecast system instead of a business-as-usual strategy, not only for predictions released one to two weeks ahead of the extreme event, but also at longer lead times.
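The cost-loss framework mentioned above compares the expense of acting on forecasts against the best fixed (climatological) strategy. The sketch below uses invented numbers, not the paper's hazelnut figures: a user pays a protection cost C whenever warned, or risks a loss L when an event strikes unprotected.

```python
# Classic cost-loss decision model: does acting on forecast warnings
# cost less, on average, than the best fixed strategy? All numbers
# below are illustrative.

def expected_expense(hits, misses, false_alarms, n, cost, loss):
    """Mean expense per occasion when protecting on every warning:
    pay cost on hits and false alarms, suffer loss on misses."""
    return (hits * cost + false_alarms * cost + misses * loss) / n

def climatology_expense(p_event, cost, loss):
    """Best fixed strategy: always protect (cost) or never (p * loss)."""
    return min(cost, p_event * loss)

# 100 occasions: 20 cold-spell events, the forecast catches 16 of
# them and issues 10 false alarms.
n, hits, misses, fas = 100, 16, 4, 10
cost, loss = 1.0, 10.0
with_fc = expected_expense(hits, misses, fas, n, cost, loss)
clim = climatology_expense(0.2, cost, loss)
print(with_fc, clim)  # -> 0.66 1.0: the forecast has positive value
```

The paper's contribution is applying this kind of value calculation at subseasonal lead times; here the only point is that value depends jointly on forecast skill (hits, misses, false alarms) and the user's cost/loss ratio.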
How to cite: Ruggieri, P., Materia, S., Muñoz, A. G., Alvarez Castro, M. C., Mason, S. J., Vitart, F., and Gualdi, S.: Multi-model subseasonal forecasts of spring cold spells: potential value for the hazelnut agribusiness, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20707, https://doi.org/10.5194/egusphere-egu2020-20707, 2020.
EGU2020-6375 | Displays | ITS3.2/NH10.7
Ecosystem-based approaches to disaster risk reduction in Japan: transdisciplinary research and actions
Takehito Yoshida
Natural disasters occur at an increasing rate, probably due to ongoing climate change, and adaptation to natural disaster risks is key to the sustainability of local communities in Japan. At the same time, Japan is experiencing a rapid decline of its human population and consequent aging. Ecosystem-based approaches to disaster risk reduction (Eco-DRR) take advantage of the multi-functionality of ecosystems and biodiversity, including their capacity to mitigate natural disasters while providing multiple ecosystem services, and population decline provides ample opportunity for implementing Eco-DRR. We are developing practical solutions for the implementation of Eco-DRR by visualizing natural disaster risks, evaluating the multi-functionality of Eco-DRR solutions, conducting transdisciplinary approaches in collaboration with diverse stakeholders, and advocating traditional and local knowledge of disaster risk reduction. I will talk about some progress of our ongoing research project at RIHN (Research Institute for Humanity and Nature), Japan.
How to cite: Yoshida, T.: Ecosystem-based approaches to disaster risk reduction in Japan: transdisciplinary research and actions, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6375, https://doi.org/10.5194/egusphere-egu2020-6375, 2020.
ITS4.1/NP4.2 – Big data and machine learning in geosciences
EGU2020-14241 | Displays | ITS4.1/NP4.2
Data management and analysis of the high-resolution multi-model climate dataset from the PRIMAVERA project
Jon Seddon and Ag Stephens
The PRIMAVERA project aims to develop a new generation of advanced and well-evaluated high-resolution global climate models. An integral component of PRIMAVERA is a new set of simulations at standard and high resolution from seven different European climate models. The expected data volume is 1.6 petabytes, comparable to the total volume of data in CMIP5.
A comprehensive Data Management Plan (DMP) was developed to allow the distributed group of scientists to produce and analyse this volume of data within the project’s limited duration. The DMP takes the approach of bringing the analysis to the data. The simulations were run on HPCs across Europe and the data were transferred to the JASMIN super-data-cluster at the Rutherford Appleton Laboratory. A Data Management Tool (DMT) was developed to catalogue the available data and allow users to search through it using an intuitive web-based interface. The DMT allows users to request that the data they require be restored from tape to disk. The users are then able to perform all their analyses at JASMIN. The DMT also controls the publication of the data to the Earth System Grid Federation, making it available to the global community.
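The catalogue-and-restore workflow the DMT provides can be sketched as a minimal in-memory model. The class and field names below are illustrative stand-ins, not the actual DMT implementation:

```python
from dataclasses import dataclass

@dataclass
class FileRecord:
    path: str
    variable: str
    model: str
    on_disk: bool = False

class Catalogue:
    """Toy stand-in for a DMT-style catalogue: search file metadata,
    then queue tape-to-disk restore requests for offline files."""

    def __init__(self, records):
        self.records = list(records)
        self.restore_queue = []

    def search(self, **filters):
        # Return records whose metadata match all given filters
        return [r for r in self.records
                if all(getattr(r, k) == v for k, v in filters.items())]

    def request_restore(self, records):
        # Only files not already on disk need restoring from tape
        for r in records:
            if not r.on_disk:
                self.restore_queue.append(r.path)
```

Separating the metadata search from the restore request is what lets users browse the full 1.6 PB catalogue cheaply and pay the tape-recall cost only for the files they actually need.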
Here we introduce JASMIN and the PRIMAVERA data management plan. We describe how the DMT allowed the project’s scientists to analyse this multi-model dataset. We describe how the tools and techniques developed can help future projects.
How to cite: Seddon, J. and Stephens, A.: Data management and analysis of the high-resolution multi-model climate dataset from the PRIMAVERA project, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-14241, https://doi.org/10.5194/egusphere-egu2020-14241, 2020.
EGU2020-12224 | Displays | ITS4.1/NP4.2
Data assimilation of subsurface flow via iterative ensemble smoother and physics-informed neural network
Nanzhe Wang and Haibin Chang
Subsurface flow problems usually involve some degree of uncertainty. To reduce the uncertainty of subsurface flow predictions, data assimilation is usually necessary, but it is time consuming. To improve its efficiency, a surrogate model of the subsurface flow problem may be utilized. In this work, a physics-informed neural network (PINN) based surrogate model is proposed for subsurface flow with uncertain model parameters. Training data generated by solving stochastic partial differential equations (SPDEs) are utilized to train the neural network. Besides the data mismatch term, a term that incorporates physical laws is added to the loss function. The trained neural network can predict the solutions of the subsurface flow problem with new stochastic parameters and can serve as a surrogate for approximating the relationship between model output and model input. By incorporating physical laws, the PINN can achieve high accuracy. An iterative ensemble smoother (ES) is then introduced to implement the data assimilation task based on the PINN surrogate. Several subsurface flow cases are designed to test the performance of the proposed paradigm. The results show that the PINN surrogate can significantly improve the efficiency of the data assimilation task while maintaining high accuracy.
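The composite loss described above (a data mismatch term plus a physics term) can be illustrated with a toy 1-D example. This sketch uses finite differences in place of the automatic differentiation a real PINN would use, and the function name and PDE choice (steady Darcy flow) are ours, not the authors':

```python
import numpy as np

def pinn_style_loss(u_pred, u_obs, obs_idx, k, dx, lam=1.0):
    """Toy composite loss in the spirit of a PINN surrogate:
    data mismatch at observation points plus a physics residual for
    1-D steady Darcy flow, d/dx(k du/dx) = 0, discretised with
    central finite differences (illustrative stand-in for the
    automatic differentiation used in a real PINN)."""
    # Data mismatch: squared error at observed nodes
    data_term = np.mean((u_pred[obs_idx] - u_obs) ** 2)
    # Physics residual: divergence of the flux at interior nodes
    flux = k[:-1] * np.diff(u_pred) / dx   # k du/dx on cell faces
    residual = np.diff(flux) / dx
    physics_term = np.mean(residual ** 2)
    return data_term + lam * physics_term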
How to cite: Wang, N. and Chang, H.: Data assimilation of subsurface flow via iterative ensemble smoother and physics-informed neural network, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12224, https://doi.org/10.5194/egusphere-egu2020-12224, 2020.
EGU2020-3814 | Displays | ITS4.1/NP4.2
How to improve 3-m resolution land cover mapping from imperfect 10-m resolution land cover mapping product?
Runmin Dong and Haohuan Fu
EGU2020-18670 | Displays | ITS4.1/NP4.2
Data Tailor: Integrate EUMETSAT's data into your datacube
Daniel Lee, Rodrigo Romero, Peter Miu, Fernando Jose Pereda Garcimartin, and Oscar Perez Navarro
EUMETSAT hosts a large collection of geophysical data sets that have been produced by over 35 years of operational meteorological satellites. This trove of remote sensing data products is complex, featuring observations from multiple generations of polar and geostationary satellites. Each mission has different primary objectives, resulting in different instrument payloads, resolutions, and variables observed. As EUMETSAT's next-generation core missions are launched and joined by smaller missions with narrower foci, both the size and complexity of these data will increase exponentially.
The data alone are a valuable resource for the geosciences, but the value that can be extracted from them increases greatly when they are combined with data from other disciplines. As EUMETSAT's primary missions are focused on observational meteorology, the potential synergies with e.g. numerical weather prediction data are readily apparent. However, EUMETSAT data is increasingly used in applications from other domains, e.g. oceanography, agriculture, and atmospheric composition, to name just a few.
New solutions are being implemented to unlock the potential of EUMETSAT's data, particularly in combination with data from other disciplines and leveraging emerging data-driven approaches such as data mining and machine learning. A particular challenge in this regard is the heterogeneity of the individual data products, each of which is optimised to accurately describe the observed variable and quality information associated with the observing instrument and platform. A further challenge is the heterogeneity of the potential users, all of whom have preferred toolsets and processing chains.
The EUMETSAT Data Tailor is part of a larger initiative at EUMETSAT to support users in taking full advantage of our data holdings. It addresses the problem that there is no single "best format" for all users by allowing users to tailor data products to fit their needs. With it, users can extract the data that is relevant for them by selecting by geospatial and spectral criteria, resample into the projection and resolution that they require, and reformat the data into a variety of popular formats. Tailoring workflows can be created graphically or written by hand in YAML and saved in a given Data Tailor deployment.
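As a rough illustration of what a hand-written YAML tailoring workflow might look like, the fragment below chains the operations mentioned above (selection, resampling, reformatting). The operation names and keys are invented for illustration; the actual schema is defined by the Data Tailor documentation:

```yaml
# Hypothetical tailoring chain -- keys are illustrative only
workflow: ocean-colour-subset
steps:
  - filter:
      bands: [vis06, nir16]             # spectral selection
  - roi:
      nswe: [60.0, 30.0, -20.0, 20.0]   # geospatial crop
  - reproject:
      crs: EPSG:4326
      resolution: 0.05
  - format:
      type: netcdf4                     # target output format
```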
The Data Tailor is cloud-native, exposing its functionality as a microservice, via a web UI, on the command line, and as a Python package. Support for additional functions can be added easily via its plug-in architecture, which allows dynamically adding and removing functionality to an installation. It is released under an Apache v2 license, making it easy to deploy the software in any context. Whether data is in flight or at rest, the Data Tailor offers users easy access to EUMETSAT products in the format of their choice.
This presentation will showcase the Data Tailor and briefly address other exciting developments at EUMETSAT that the Data Tailor is integrated with that will support big data workflows with EUMETSAT's past, present, and future data.
How to cite: Lee, D., Romero, R., Miu, P., Jose Pereda Garcimartin, F., and Perez Navarro, O.: Data Tailor: Integrate EUMETSAT's data into your datacube, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18670, https://doi.org/10.5194/egusphere-egu2020-18670, 2020.
EGU2020-10029 | Displays | ITS4.1/NP4.2
Bridging the gap between Big Earth data users and future (cloud-based) data systems - Towards a better understanding of user requirements of cloud-based data systems
Julia Wagemann, Stephan Siemen, Jörg Bendix, and Bernhard Seeger
The European Commission’s Earth Observation programme Copernicus produces an unprecedented amount of openly available multi-dimensional environmental data. However, data ‘accessibility’ remains one of the biggest obstacles for users of open Big Earth Data and hinders full data exploitation. Data services have to evolve from pure download services to offer easier, more on-demand data access. Different concepts are currently being explored to make Big Earth Data more accessible for users, e.g. virtual research infrastructures, data cube technologies, standardised web services or cloud processing services such as the Google Earth Engine or the Copernicus Climate Data Store Toolbox. Each offering provides different types of data, tools and functionalities. Data services are often developed to satisfy only specific user requirements and needs.
For this reason, we conducted a user requirements survey between November 2018 and June 2019 among users of Big Earth Data (including users of Earth Observation data, meteorological and environmental forecasts and other geospatial data) to better understand user requirements of Big Earth Data. To reach an active data user community for this survey, we partnered with ECMWF, which has 40 years of experience in providing data services for weather forecast data and environmental data sets of the Copernicus Programme.
We were interested in which datasets users currently use, which datasets they would like to use in the future, and the reasons why they have not yet explored certain datasets. We were interested in the tools and software they use to process the data and what challenges they face in accessing and handling Big Earth Data. Another part focused on future (cloud-based) data services; there, we were interested in the users’ motivation to migrate their data processing tasks to cloud-based data services and asked them which aspects of these services they consider important.
While preliminary results of the study were released last year, this year the final study results are presented. A specific focus will be put on users’ expectations of future (cloud-based) data services, along with recommendations for data users and data providers alike to ensure the full exploitation of Big Earth Data in the future.
How to cite: Wagemann, J., Siemen, S., Bendix, J., and Seeger, B.: Bridging the gap between Big Earth data users and future (cloud-based) data systems - Towards a better understanding of user requirements of cloud-based data systems, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10029, https://doi.org/10.5194/egusphere-egu2020-10029, 2020.
EGU2020-11559 | Displays | ITS4.1/NP4.2
AI4GEO: An automatic 3D geospatial information capability
Simon Baillarin, Pierre-Marie Brunet, Pierre Lassalle, Gwenael Souille, Laurent Gabet, Gilles Foulon, Gaelle Romeyer, Cedrik Ferrero, Thanh-Long Huynh, Antoine Masse, and Stan Assier
The availability of 3D Geospatial information is a key stake for many expanding sectors such as autonomous vehicles, business intelligence and urban planning.
The availability of huge volumes of satellite, airborne and in-situ data now makes this production feasible on a large scale. Nonetheless, it still requires skilled manual intervention to secure an acceptable level of quality, which prevents mass production.
New artificial intelligence and big data technologies are key in lifting these obstacles.
The AI4GEO project aims at developing an automatic solution for producing 3D geospatial information and offering new value-added services, leveraging innovative methods adapted to 3D imagery.
The AI4GEO consortium consists of institutional partners (CNES, IGN, ONERA) and industrial groups (CS-SI, AIRBUS, CLS, GEOSAT, QWANT, QUANTCUBE) covering the whole value chain of Geospatial Information.
With a four-year timeline, the project is structured around two R&D axes which will progress simultaneously and feed each other.
The first axis consists of developing a set of technological bricks allowing the automatic production of qualified 3D maps composed of 3D objects and associated semantics. This collaborative work benefits from the latest research from all partners in the field of AI and Big Data technologies as well as from an unprecedented database (satellite and airborne data (optics, radar, lidar) combined with cartographic and in-situ data).
The second axis consists of deriving from these technological bricks a variety of services for different fields: 3D semantic mapping of cities, macroeconomic indicators, decision support for water management, autonomous transport, and a consumer search engine.
Started in 2019, the first axis of the project has already produced very promising results. A first version of the platform and technological bricks are now available.
This paper will first introduce the AI4GEO initiative: its context and overall objectives.
It will then present the current status of the project; in particular, it will focus on the innovative approach to handling big 3D datasets for analytics needs and present the first results of 3D semantic segmentation on various test sites and the associated perspectives.
How to cite: Baillarin, S., Brunet, P.-M., Lassalle, P., Souille, G., Gabet, L., Foulon, G., Romeyer, G., Ferrero, C., Huynh, T.-L., Masse, A., and Assier, S.: AI4GEO: An automatic 3D geospatial information capability, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11559, https://doi.org/10.5194/egusphere-egu2020-11559, 2020.
EGU2020-14325 | Displays | ITS4.1/NP4.2
Parquet Cube to store and process gridded data
Elisabeth Lambert, Jean-Michel Zigna, Thomas Zilio, and Flavien Gouillon
The volume of data in the Earth observation domain is growing considerably, especially with the emergence of new generations of satellites providing much more precise measurements and thus voluminous data and files. The ‘big data’ field provides solutions for storing and processing huge amounts of data. However, there is no established consensus, neither in the industrial market nor in the open source community, on big data solutions adapted to the Earth observation domain. The main difficulty is that these multi-dimensional data are not naturally scalable. CNES and CLS, driven by CLS business needs, carried out a study to address this difficulty.
Two complementary use cases, at different points in the value chain, have been identified: 1) the development of an altimetric processing chain storing low-level altimetric measurements from multiple satellite missions, and 2) the extraction of oceanographic environmental data along animal and ship tracks. The original data format of these environmental variables is netCDF. We will first show the state of the art of big data technologies adapted to this problem and their limitations. Then we will describe the prototypes behind both use cases and in particular how the data are split into independent chunks that can then be processed in parallel. The storage format chosen is Apache Parquet; in the first use case, the data are manipulated with the xarray library while all parallel processes are implemented with the Dask framework. An implementation using the Zarr library instead of Parquet has also been developed and its results will also be shown. In the second use case, the enrichment of the track with METOC (meteorological/oceanographic) data is developed using the Spark framework. Finally, results of this second use case, which runs operationally today for the extraction of oceanographic data along tracks, will be shown. This second solution is an alternative to the Pangeo solution in the world of industrial and Java development. It extends the traditional THREDDS subsetter, delivered by the open-source Unidata community, to a big data implementation. This Parquet storage and associated service implements a smooth transition of gridded data into big data infrastructures.
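The chunking idea, splitting a multi-dimensional grid into independently processable pieces before writing them as Parquet partitions, can be sketched as follows. The helper `grid_to_table` and the 10-degree spatial tiling are our illustrative choices, not the CNES/CLS design:

```python
import numpy as np
import pandas as pd  # DataFrames can be written to Parquet via to_parquet()

def grid_to_table(values, times, lats, lons, chunk_deg=10.0):
    """Flatten a (time, lat, lon) gridded field into a columnar table
    with a spatial chunk key, so each chunk can be stored as an
    independent Parquet partition and processed in parallel.
    Hypothetical helper, not the operational implementation."""
    t, la, lo = np.meshgrid(times, lats, lons, indexing="ij")
    df = pd.DataFrame({
        "time": t.ravel(),
        "lat": la.ravel(),
        "lon": lo.ravel(),
        "value": np.asarray(values).ravel(),
    })
    # Chunk key: coarse spatial tile (default 10x10 degree boxes)
    df["chunk"] = (
        (df["lat"] // chunk_deg).astype(int).astype(str) + "_" +
        (df["lon"] // chunk_deg).astype(int).astype(str)
    )
    return df
```

Each chunk group could then be written as its own partition, e.g. `df.to_parquet(path, partition_cols=["chunk"])`, so that Dask or Spark workers can read and process tiles independently.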
How to cite: lambert, E., Zigna, J., Zilio, T., and Gouillon, F.: Parquet Cube to store and process gridded data, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-14325, https://doi.org/10.5194/egusphere-egu2020-14325, 2020.
EGU2020-6688 | Displays | ITS4.1/NP4.2
Applying LSTM and GAN to build a deep learning model (TGAN-TEC) for global ionospheric TEC
Zhou Chen, Yue Deng, and Jing-Song Wang
TEC is a very important ionospheric parameter and a commonly used observation for studying various ionospheric physical mechanisms and other technologies related to the ionosphere (e.g. global positioning). However, the variation of global TEC is very dynamic, and its spatiotemporal variation is extremely complicated. In this paper, we build a novel global ionospheric TEC (total electron content) prediction model based on two deep learning algorithms: a generative adversarial network (GAN) and long short-term memory (LSTM). Training data come from 10 years of IGS TEC data, which provide plenty of data for the GAN and LSTM algorithms to capture the spatial and temporal variation of TEC, respectively. The prediction accuracy of this model has been calculated under different levels of geomagnetic activity. The statistical results suggest that the proposed ionospheric model can be used as an efficient tool for short-term ionospheric TEC prediction.
How to cite: chen, Z., deng, Y., and wang, J.-S.: Applying LSTM and GAN to build a deep learning model (TGAN-TEC) for global ionospheric TEC, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6688, https://doi.org/10.5194/egusphere-egu2020-6688, 2020.
EGU2020-4434 | Displays | ITS4.1/NP4.2
A Deep Learning Method for Short-Range Point Forecasts of Wind Speed
Petrina Papazek and Irene Schicker
In this study, we present a deep learning-based method to provide short-range point forecasts (1–2 days ahead) of the 10-metre wind speed for complex terrain. Gridded data with different horizontal resolutions from numerical weather prediction (NWP) models, gridded observations, and point data are used. An artificial neural network (ANN), able to process several differently structured inputs simultaneously, is developed.
The heterogeneous structure of the inputs is handled by combining convolutional, long short-term memory (LSTM), fully connected (FC), and other layers within a common network. Convolutional layers are best known for image processing tasks; however, they are applicable to any gridded data source. An LSTM layer models recurrent steps in the ANN and is thus well suited to time series such as meteorological observations. Further key objectives of this research are to account for different spatial and temporal resolutions and for the different topographic characteristics of the selected sites.
Data from the Austrian TAWES system (Teilautomatische Wetterstationen, meteorological observations in 10-minute intervals), INCA's (Integrated Nowcasting through Comprehensive Analysis) gridded observation fields, and NWP data from the ECMWF IFS (European Center for Medium-Range Weather Forecast’s Integrated Forecasting System) model are used in this study. Hourly runs for 12 test locations (selected TAWES sites representing different topographic characteristics in Austria) and different seasons are conducted.
The ANN’s results yield, in general, high forecast skill (MAE = 1.13 m/s, RMSE = 1.72 m/s), indicating successful learning from the training data. Different numbers of input-field grid points centred on the target sites were investigated. A small number of ECMWF IFS grid points (e.g. 5×5) combined with a larger number of INCA grid points (e.g. 15×15) resulted in the best-performing forecasts; the different numbers of grid points are directly related to the models' resolutions. Within the nowcasting range, however, adding NWP data does not increase model performance, so a stronger weighting towards the observations is important there. Beyond the nowcasting range, the deep learning-based ANN model outperforms more basic machine learning algorithms as well as the other alternative models.
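The grid-point selection described above (e.g. a 5×5 IFS block or 15×15 INCA block around a station) amounts to extracting a patch centred on the target cell. A hedged numpy sketch; the field, indices, and sizes are illustrative, not the authors' code:

```python
import numpy as np

def extract_patch(field, i, j, size=5):
    """Return a size x size patch of a 2-D gridded field centred on
    grid indices (i, j), e.g. the NWP grid cell nearest a station."""
    h = size // 2
    return field[i - h:i + h + 1, j - h:j + h + 1]

field = np.arange(100).reshape(10, 10)      # toy 10x10 grid
patch = extract_patch(field, 4, 4, size=5)  # 5x5 block around cell (4, 4)
print(patch.shape)  # (5, 5)
```

Patches from grids of different resolutions (here IFS vs. INCA) would simply use different `size` values per input branch of the network.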
How to cite: Papazek, P. and Schicker, I.: A Deep Learning Method for Short-Range Point Forecasts of Wind Speed, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4434, https://doi.org/10.5194/egusphere-egu2020-4434, 2020.
EGU2020-5372 | Displays | ITS4.1/NP4.2
Developing performance checks on machine-learning models in an automated system for developing hazard maps
Ashleigh Massam, Ashley Barnes, Siân Lane, Robert Platt, and David Wood
JBA Risk Management (JBA) uses JFlow®, a two-dimensional hydraulic model, to simulate surface water, fluvial, and dam break flood risk. National flood maps are generated on a computer cluster that parallelises up to 20,000 model simulations, covering an area of up to 320,000 km² and creating up to 10 GB of data per day.
JBA uses machine-learning models to identify artefacts in the flood simulations. The ability of machine-learning models to quickly process and detect these artefacts, combined with the use of an automated control system, means that hydraulic modelling throughput can be maximised with little user intervention. However, continual retraining of the model and application of software updates introduce the risk of a significant decrease in performance. This necessitates the use of a system to monitor the performance of the machine-learning model to ensure that a sufficient level of quality is maintained, and to allow drops in quality to be investigated.
We present an approach used to develop performance checks on a machine-learning model that identifies artificial depth differences between hydraulic model simulations. Performance checks are centred on the use of control charts, an approach commonly used in manufacturing processes to monitor the proportion of items produced with defects. In order to develop this approach for a geoscientific context, JBA has (i) built a database of randomly-sampled hydraulic model outputs currently totalling 200 GB of data; (ii) developed metrics to summarise key features across a modelled region, including geomorphology and hydrology; (iii) used a random forest regression model to identify feature dominance to determine the most robust relationships that contribute to depth differences in the flood map; and (iv) developed the performance check in an automated system that tests every nth hydraulic modelling output against data sampled based on common features.
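The control-chart idea above can be sketched as a standard p-chart: the proportion of defective outputs per batch is tracked against three-sigma limits derived from the historical mean. A generic sketch of that quality-control calculation, not JBA's implementation; the rates and batch size are made up:

```python
import numpy as np

def p_chart_limits(p_bar, n):
    """Three-sigma control limits for a monitored proportion over
    batches of n items (the classic p-chart from manufacturing QC)."""
    sigma = np.sqrt(p_bar * (1 - p_bar) / n)
    return max(0.0, p_bar - 3 * sigma), min(1.0, p_bar + 3 * sigma)

# Historical defect (misidentification) rate and batch size
lcl, ucl = p_chart_limits(p_bar=0.05, n=400)
batch_rate = 0.09                 # defect rate of the latest batch
print(batch_rate > ucl)  # True: flags a drop in model quality
```

A batch rate outside [lcl, ucl] would trigger investigation, e.g. after a retraining cycle or software update.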
The implementation of the performance checks allows JBA to assess potential changes in the quality of artificial feature identification following a training cycle in a development environment prior to release in a production environment.
How to cite: Massam, A., Barnes, A., Lane, S., Platt, R., and Wood, D.: Developing performance checks on machine-learning models in an automated system for developing hazard maps, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5372, https://doi.org/10.5194/egusphere-egu2020-5372, 2020.
EGU2020-7125 | Displays | ITS4.1/NP4.2 | Highlight
Connecting big data from climate and biology using statistics and machine learning techniques
Susanne Pfeifer, Katharina Bülow, and Lennart Marien
Within the Hamburg Cooperation project "HYBRIDS – Chances and Challenges of New Genomic Combinations" (https://www.biologie.uni-hamburg.de/en/forschung/verbundvorhaben/hybride-mehr-infos.html), one subproject addresses the problem of finding relations between the existence of hybrid plant species and the climate and its variability at the same location. To this end, biological and climatic data are brought together, and statistical and machine learning techniques are applied to derive climatic differences between regions where both parent species but no hybrid species are found and regions where both parent species and the hybrid species are found.
Both the climate data (here daily gridded E-OBS temperature (mean, min, max) and precipitation at ~10 km grid resolution for the period 1970 to 2006 (Haylock et al., 2008; Cornes et al., 2018)) and the plant data (Hybrid Flora of the British Isles, 700 taxa, 6,112,847 lines of data (Stace et al., 2015)) can be considered "big data". However, the peculiarities of the two datasets are very different, and so are the issues to be considered when handling them.
We will present the first results of this interdisciplinary effort, discuss the methodological issues and elaborate on the chances and challenges of interpreting the findings.
Cornes, R., G. van der Schrier, E.J.M. van den Besselaar, and P.D. Jones. 2018: An Ensemble Version of the E-OBS Temperature and Precipitation Datasets, J. Geophys. Res. Atmos., 123.
Haylock, M. R., Hofstra, N., Klein Tank, A. M. G., Klok, E. J., Jones, P. D. & M. New (2008): A European daily high-resolution gridded data set of surface temperature and precipitation for 1950-2006. Journal of Geophysical Research Atmospheres, 113(20). https://doi.org/10.1029/2008JD010201
Stace, C.A., Preston, C.D. & D.A. Pearman (2015): Hybrid flora of the British Isles. Botanical Society of Britain & Ireland. 501pp.
How to cite: Pfeifer, S., Bülow, K., and Marien, L.: Connecting big data from climate and biology using statistics and machine learning techniques, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7125, https://doi.org/10.5194/egusphere-egu2020-7125, 2020.
EGU2020-7067 | Displays | ITS4.1/NP4.2
Improving computational efficiency of forward modelling for ground-based time-domain electromagnetic data using neural networks
Muhammad Rizwan Asif*, Thue Sylvester Bording, Adrian S. Barfod, Bo Zhang, Jakob Juul Larsen, and Esben Auken
Inversion of large-scale time-domain electromagnetic surveys is a time-consuming and computationally expensive task. Probabilistic or deterministic methodologies, such as Monte Carlo inversion or Gauss-Newton methods, require repeated calculation of forward responses, and, depending on methodology and survey size, the number of forward responses can reach from thousands to millions. In this study, we propose a machine learning based forward modelling approach to significantly decrease the time required to calculate the forward responses, and thus also the inversion time. We employ a fully-connected feed-forward neural network to approximate the forward modelling process. For training of the network, we generated 93,500 forward responses using AarhusInv with resistivity models derived from 9 surveys at different locations in Denmark, representing a Quaternary geological setting. The resistivity models are discretized into 30 layers with logarithmically increasing thicknesses down to 300 m, and resistivities range from 1 to 1,000 Ω·m. The forward responses were modelled with 14 gates/decade from 10⁻⁷ s to 10⁻² s. To ensure better network convergence, the input resistivity models are normalized after being logarithmically transformed. Furthermore, the network target outputs, i.e. the forward responses, are globally normalized, with each gate normalized relative to the maximum and minimum values of that gate. This ensures that each gate is prioritized equally.
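The two normalizations described above (log-transformed inputs; per-gate min/max scaling of the targets) can be sketched in a few lines of numpy. The array shapes and random values below are toy stand-ins, not the AarhusInv training data:

```python
import numpy as np

# Toy stand-ins: 4 resistivity models (30 layers each) and their
# forward responses over 70 time gates
rng = np.random.default_rng(0)
resistivity = rng.uniform(1, 1000, size=(4, 30))   # ohm-m
responses = rng.uniform(1e-9, 1e-5, size=(4, 70))  # V/m^2-like units

# Inputs: log-transform, then standardize
log_rho = np.log10(resistivity)
rho_norm = (log_rho - log_rho.mean()) / log_rho.std()

# Targets: scale each gate by its own global min/max so every gate
# contributes equally to the training loss
g_min, g_max = responses.min(axis=0), responses.max(axis=0)
resp_norm = (responses - g_min) / (g_max - g_min)
print(resp_norm.min(), resp_norm.max())  # 0.0 1.0
```

Without the per-gate scaling, late gates (small amplitudes) would be drowned out by early gates in the loss.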
The network performance is evaluated on a test set derived from a separate survey containing 5,978 resistivity models, by directly comparing the neural network based forward responses to the AarhusInv forward responses. The performance is exceptionally good, with 99.32% of all gates accurate to within 3% relative error, which is comparable to the data uncertainty. The time derivatives of the generated forward responses, dB/dt, are also computed by convolving a transmitter waveform. The dB/dt accuracy is 86.2%, but improves to 98.02% within 3% error after post-processing the forward responses with a local smoothing algorithm. The low dynamic range of the target outputs induces rounding/truncation errors, which leads to jagging and thereby increases the error when the waveform is applied to the unprocessed forward responses. However, the 1.98% of gates that still exceed the 3% error after post-processing lie within typical data uncertainty, ensuring suitability for use in inversion schemes.
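The accuracy metric used above (the share of gates within a relative-error threshold of the reference forward response) is straightforward to compute; a small sketch with made-up numbers:

```python
import numpy as np

def within_rel_error(pred, ref, tol=0.03):
    """Fraction of predicted values whose relative error with respect
    to the reference response is within tol (here 3%)."""
    rel = np.abs(pred - ref) / np.abs(ref)
    return float(np.mean(rel <= tol))

ref = np.array([1.0, 2.0, 4.0, 8.0])     # reference (e.g. AarhusInv)
pred = np.array([1.02, 1.9, 4.1, 8.1])   # neural network output
print(within_rel_error(pred, ref))  # 0.75
```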
The proposed forward modelling strategy is up to 17 times faster than commonly used accurate modelling methods, and may be incorporated into either deterministic or probabilistic inversion algorithms, allowing for significantly faster inversion of large datasets.
A TEM system with a 40 m × 40 m central-loop configuration was used in this study; in principle, however, any geometry can be applied. Additionally, the proposed scheme can be extended to other systems, such as airborne EM systems, by including the altitude as an extra input parameter.
How to cite: Asif*, M. R., Bording, T. S., Barfod, A. S., Zhang, B., Larsen, J. J., and Auken, E.: Improving computational efficiency of forward modelling for ground-based time-domain electromagnetic data using neural networks, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7067, https://doi.org/10.5194/egusphere-egu2020-7067, 2020.
EGU2020-12899 | Displays | ITS4.1/NP4.2
A new weighted MSE loss for wind speed forecasting based on deep learning models
Xi Chen, Ruyi Yu, Sajid Ullah, Dianming Wu, Min Liu, Yonggui Huang, Hongkai Gao, Jie Jiang, and Ning Nie
Wind speed forecasting is important for many real-life applications, especially the control and monitoring of wind power plants. Owing to the non-linearity of wind speed time series, it is hard to improve forecasting accuracy, especially several days ahead. Many forecasting models have therefore been proposed. Recently, deep learning models have received great attention, since they outperform conventional machine learning models. The majority of existing deep learning models use the mean squared error (MSE) as the loss function. The standard MSE loss weights all samples equally, which hinders further improvement of forecasting performance on nonlinear wind speed time series.
In this work, we propose a new weighted MSE loss function for wind speed forecasting based on deep learning. As is well known, the training procedure in practice is dominated by easy-training samples, which makes the computation ineffective and inefficient. In the new weighted MSE loss function, the loss weights of easy-training samples are automatically reduced according to their contribution, so the total loss focuses mainly on hard-training samples. To verify the new loss function, a Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU) are used as base models.
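The abstract does not give the exact weighting scheme; one plausible form, sketched here as an assumption, scales each squared error by its normalized magnitude so that small-error (easy) samples are down-weighted and hard samples dominate the total loss:

```python
import numpy as np

def weighted_mse(y_true, y_pred, gamma=2.0):
    """MSE variant that down-weights easy samples: each squared error
    is scaled by its magnitude relative to the largest error, raised
    to gamma, so hard-to-fit samples dominate the loss."""
    err2 = (y_true - y_pred) ** 2
    w = (err2 / (err2.max() + 1e-12)) ** gamma
    return np.mean(w * err2)

y_true = np.array([1.0, 2.0, 3.0, 10.0])
y_pred = np.array([1.1, 2.1, 3.1, 5.0])   # one hard sample, three easy
plain = np.mean((y_true - y_pred) ** 2)
print(weighted_mse(y_true, y_pred) < plain)  # True: easy samples suppressed
```

In a deep learning framework the same expression would be written with the framework's tensor ops so that gradients flow through it.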
A number of experiments have been carried out using open wind speed time series data collected from China and the United States to demonstrate the effectiveness of the new loss function with the three popular models. The performance of the models has been evaluated through statistical error measures such as the Mean Absolute Error (MAE). The MAE with the proposed weighted MSE loss is up to 55% lower than with the traditional MSE loss. The experimental results indicate that the new weighted loss function can outperform the popular MSE loss function in wind speed forecasting.
How to cite: Chen, X., Yu, R., Ullah, S., Wu, D., Liu, M., Huang, Y., Gao, H., Jiang, J., and Nie, N.: A new weighted MSE loss for wind speed forecasting based on deep learning models, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12899, https://doi.org/10.5194/egusphere-egu2020-12899, 2020.
EGU2020-13686 | Displays | ITS4.1/NP4.2
Downscaling of surface wind speed over the North Atlantic using conditional Generative Adversarial Networks
Mikhail Krinitskiy, Svyatoslav Elizarov, Alexander Gavrikov, and Sergey Gulev
Accurate simulation of the physics of the Earth's atmosphere involves solving partial differential equations with a number of closures handling subgrid processes. In some cases, the parameterizations may approximate the physics well. However, there is always room for improvement, which often comes at high computational expense. Thus, at the moment, modelling of the atmosphere is a theatre of compromises between the accuracy of the physics representation and its computational cost.
At the same time, some of the parameterizations are naturally empirical. They can be improved further with a data-driven approach, which may provide increased approximation quality at the same or even lower computational cost. In this perspective, a statistical model that learns a data distribution may deliver exceptional results. Recently, Generative Adversarial Networks (GANs) were shown to be a very flexible model type for approximating distributions of hidden representations in the case of two-dimensional visual scenes, a.k.a. images. The same approach may provide an opportunity for the data-driven approximation of subgrid processes in atmosphere modelling.
In our study, we present a novel approach for approximating subgrid processes based on conditional GANs (cGANs). As a proof of concept, we present preliminary results of the downscaling of surface wind over the North Atlantic. We explore the potential of the presented approach in terms of the speedup of the downscaling procedure compared to dynamical simulations such as WRF model runs. We also study the potential of additional regularizations applied to improve the cGAN learning procedure as well as the resulting generalization ability and accuracy.
How to cite: Krinitskiy, M., Elizarov, S., Gavrikov, A., and Gulev, S.: Downscaling of surface wind speed over the North Atlantic using conditional Generative Adversarial Networks, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13686, https://doi.org/10.5194/egusphere-egu2020-13686, 2020.
EGU2020-14677 | Displays | ITS4.1/NP4.2
Advances in Gaussian Processes for Earth Sciences: Physics-aware, interpretability and consistency
Gustau Camps-Valls, Daniel Svendsen, Luca Martino, Adrian Pérez-Suay, Maria Piles, and Jordi Muñoz-Marí
Earth observation from remote sensing satellites allows us to monitor the processes occurring on the land cover, in water bodies, and in the atmosphere, as well as their interactions. In the last decade, machine learning has impacted the field enormously due to the unprecedented data deluge and the emergence of complex problems that need to be tackled (semi)automatically. One of the main problems is the estimation of bio-geo-physical parameters from remote sensing observations. In this model inversion setting, Gaussian processes (GPs) are one of the preferred choices for model inversion, emulation, gap filling, and data assimilation. GPs not only provide accurate predictions but also allow for feature ranking, confidence intervals, and error propagation and uncertainty quantification in a principled Bayesian inference framework.
Here we introduce GPs for data analysis in general, and for the forward-inverse problem posed in remote sensing in particular. GPs are typically used for inverse modelling based on concurrent observations and in situ measurements only, or to invert model simulations. We often rely on a forward radiative transfer model (RTM) encoding the well-understood physical relations, either to perform model inversion with machine learning or to replace the RTM with machine learning models, a process known as emulation. We review four novel GP models that respect and learn the physics, and deploy useful machine learning models for remote sensing parameter retrieval and model emulation tasks. First, we introduce a Joint GP (JGP) model that combines in situ measurements and simulated data in a single GP model for inversion. Second, we present a latent force model (LFM) for GP modelling that encodes ordinary differential equations to blend data and physical models of the system. The LFM performs multi-output regression, can cope with missing data in the time series, and provides explicit latent functions that allow system analysis, evaluation and understanding. Third, we present an Automatic Gaussian Process Emulator (AGAPE) that approximates the forward physical model via interpolation, reducing the number of necessary nodes. Finally, we introduce a new GP model for data-driven regression that respects fundamental laws of physics via dependence regularization, and provides consistency estimates. All models attain data-driven, physics-aware modeling. Empirical evidence of the performance of these models will be presented through illustrative examples of vegetation/land monitoring involving multispectral (Landsat, MODIS) and passive microwave (SMOS, SMAP) observations, as well as blending data with radiative transfer models such as PROSAIL, SCOPE and MODTRAN.
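For readers unfamiliar with GP regression, the posterior computation that all of the above models build on can be sketched as follows (a generic textbook implementation for 1-D inputs, not the authors' code):

```python
import numpy as np

def rbf(a, b, ell=1.0, sf=1.0):
    # squared-exponential (RBF) kernel on 1-D inputs
    return sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

def gp_predict(x, y, xs, noise=1e-2):
    """Standard GP posterior mean and standard deviation at test points xs,
    given noisy training observations (x, y)."""
    K = rbf(x, x) + noise * np.eye(x.size)      # noisy training covariance
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = rbf(x, xs)
    mean = Ks.T @ alpha                          # posterior mean
    v = np.linalg.solve(L, Ks)
    var = np.clip(rbf(xs, xs).diagonal() - np.sum(v**2, axis=0), 0.0, None)
    return mean, np.sqrt(var)                    # predictive uncertainty
```

The predictive standard deviation is what makes GPs attractive for uncertainty quantification in retrieval and emulation tasks.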
References
Camps-Valls et al.: "A Perspective on Gaussian Processes for Earth Observation", National Science Review 6(4):616–618, 2019.
Camps-Valls et al.: "Physics-aware Gaussian processes in remote sensing", Applied Soft Computing 68:69–82, 2018.
Camps-Valls et al.: "A Survey on Gaussian Processes for Earth Observation Data Analysis: A Comprehensive Investigation", IEEE Geoscience and Remote Sensing Magazine, 2016.
How to cite: Camps-Valls, G., Svendsen, D., Martino, L., Pérez-Suay, A., Piles, M., and Muñoz-Marí, J.: Advances in Gaussian Processes for Earth Sciences: Physics-aware, interpretability and consistency, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-14677, https://doi.org/10.5194/egusphere-egu2020-14677, 2020.
EGU2020-13963 | Displays | ITS4.1/NP4.2 | Highlight
Can we predict global patterns of long-term climate change from short-term simulations?
Laura Mansfield, Peer Nowack, Matt Kasoar, Richard Everitt, William Collins, and Apostolos Voularakis
Furthering our understanding of regional climate change responses to different greenhouse gas and aerosol emission scenarios is pivotal to inform societal adaptation and mitigation measures. However, complex General Circulation Models (GCMs) used for decadal to centennial climate change projections are computationally expensive. Here we have utilised a unique dataset of existing global climate model simulations to show that a novel machine learning approach can learn relationships between short-term and long-term temperature responses to different climate forcings, which in turn can accelerate climate change projections. This approach could reduce the costs of additional scenario computations and uncover consistent early indicators of long-term climate responses.
We have explored several statistical techniques for this supervised learning task and here present predictions made with ridge regression and Gaussian process regression. We compared the results to pattern scaling, a standard simplified approach for estimating regional surface temperature responses under varying climate forcing scenarios. We highlight key challenges and opportunities for data-driven climate model emulation, especially with regard to the use of even larger model datasets and different climate variables. We demonstrate the potential of our method for gaining new insights into how and where ongoing climate change can best be detected and extrapolated, proposing it as a blueprint for future studies and encouraging data collaborations among research institutes in order to build ever more accurate climate response emulators.
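As an illustration, the closed-form ridge regression used as one of the emulation approaches can be sketched as follows (the scenario and grid-cell counts are invented for the example; the study's actual setup is not specified here):

```python
import numpy as np

def ridge_fit(X, Y, lam=1.0):
    # closed-form ridge regression: W = (X'X + lam*I)^{-1} X'Y
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

rng = np.random.default_rng(0)
# hypothetical setup: rows = forcing scenarios, columns = grid cells
X = rng.standard_normal((40, 10))        # short-term temperature responses
W_true = rng.standard_normal((10, 10))   # unknown short-to-long-term mapping
Y = X @ W_true + 0.01 * rng.standard_normal((40, 10))  # long-term responses
W = ridge_fit(X, Y, lam=1e-3)            # learned mapping
```

The regularization strength `lam` trades variance against bias, which matters when the number of training scenarios is small relative to the number of grid cells.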
How to cite: Mansfield, L., Nowack, P., Kasoar, M., Everitt, R., Collins, W., and Voularakis, A.: Can we predict global patterns of long-term climate change from short-term simulations?, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13963, https://doi.org/10.5194/egusphere-egu2020-13963, 2020.
EGU2020-5367 | Displays | ITS4.1/NP4.2
Modeling of groundwater table depth anomalies using Long Short-Term Memory networks over Europe
Yueling Ma, Carsten Montzka, Bagher Bayat, and Stefan Kollet
Groundwater is the dominant source of fresh water in many European countries. However, due to a lack of near-real-time water table depth (wtd) observations, monitoring of groundwater resources is not feasible at the continental scale. Thus, an alternative approach is required to produce wtd data from other available observations in near-real-time. In this study, we propose Long Short-Term Memory (LSTM) networks to model monthly wtd anomalies over Europe utilizing monthly precipitation anomalies as input. LSTM networks are a special type of artificial neural network, showing great promise in exploiting long-term dependencies between time series, which are expected in the response of groundwater to precipitation. To establish the methodology, spatially and temporally continuous data from terrestrial simulations at the continental scale were used, with a spatial resolution of 0.11° and ranging from 1996 to 2016 (Furusho-Percot et al., 2019). They were divided into a training set (1996 – 2012), a validation set (2012 – 2014) and a testing set (2015 – 2016) to construct local models on selected pixels over eight PRUDENCE regions. The outputs of the LSTM networks showed good agreement with the simulation results in locations with a shallow wtd (~3 m). It is important to note that the quality of the models was strongly affected by the amount of snow cover. Moreover, with the introduction of monthly evapotranspiration anomalies as additional input, pronounced improvements of the network performance were only obtained in more arid regions (i.e., the Iberian Peninsula and the Mediterranean). Our results demonstrate the potential of LSTM networks to produce high-quality wtd anomalies from hydrometeorological variables that are monitored at large scales as part of operational forecasting systems, potentially facilitating the implementation of an efficient groundwater monitoring system over Europe.
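The recurrence that gives an LSTM its long-memory behaviour can be sketched in a few lines (a generic single-cell implementation with toy dimensions, not the study's architecture):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step. Gate pre-activations are stacked as
    [input, forget, candidate, output], each of hidden size H."""
    H = h.size
    z = W @ x + U @ h + b
    i, f = sigmoid(z[:H]), sigmoid(z[H:2*H])
    g, o = np.tanh(z[2*H:3*H]), sigmoid(z[3*H:])
    c_new = f * c + i * g          # cell state carries long-term memory
    h_new = o * np.tanh(c_new)     # hidden state is the step's output
    return h_new, c_new

rng = np.random.default_rng(0)
H, D = 8, 1                        # hidden size; one input: precip anomaly
W = 0.1 * rng.standard_normal((4 * H, D))
U = 0.1 * rng.standard_normal((4 * H, H))
b = np.zeros(4 * H)
h = c = np.zeros(H)
for precip_anom in [0.3, -0.1, 0.5]:   # toy monthly anomaly series
    h, c = lstm_step(np.array([precip_anom]), h, c, W, U, b)
```

The forget gate `f` lets the cell state retain information over many months, which is the mechanism exploited when mapping precipitation anomalies to the delayed groundwater response.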
Reference:
Furusho-Percot, C., Goergen, K., Hartick, C., Kulkarni, K., Keune, J. and Kollet, S. (2019). Pan-European groundwater to atmosphere terrestrial systems climatology from a physically consistent simulation. Scientific Data, 6(1).
How to cite: Ma, Y., Montzka, C., Bayat, B., and Kollet, S.: Modeling of groundwater table depth anomalies using Long Short-Term Memory networks over Europe, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5367, https://doi.org/10.5194/egusphere-egu2020-5367, 2020.
EGU2020-16162 | Displays | ITS4.1/NP4.2
Mars’ thermal evolution from machine-learning-based 1D surrogate modelling
Siddhant Agarwal, Nicola Tosi, Doris Breuer, Sebastiano Padovan, and Pan Kessel
The parameters and initial conditions governing mantle convection in terrestrial planets like Mars are poorly known, meaning that one often needs to randomly vary several parameters to test which ones satisfy observational constraints. However, running forward models in 2D or 3D is computationally intensive to the point that it might prohibit a thorough scan of the entire parameter space. We propose using machine learning to find a low-dimensional mapping from input parameters to outputs. We use about 10,000 thermal evolution simulations with Mars-like parameters, run on a 2D quarter-cylindrical grid, to train a fully connected neural network (NN). We use the code GAIA (Hüttig et al., 2013) to solve the conservation equations of mantle convection for a fluid with Newtonian rheology and infinite Prandtl number under the Extended Boussinesq Approximation. The viscosity is calculated according to the Arrhenius law of diffusion creep (Hirth & Kohlstedt, 2003). The model also considers the effects of partial melting on the energy balance, including mantle depletion of heat-producing elements (Padovan et al., 2017), as well as major phase transitions in the olivine system.
To generate the dataset, we randomly vary five parameters with respect to each other: thermal Rayleigh number, internal-heating Rayleigh number, activation energy, activation volume and a depletion factor for heat-producing elements in the mantle. To train across time, we take the simplest possible approach, i.e., we treat time as another variable in our input vector. 80% of the dataset is used to train our NN, 10% is used to test different architectures and to avoid over-fitting, and the remaining 10% is used as a test set to evaluate the error of the predictions. For given values of the five parameters, our NN can predict the resulting horizontally-averaged temperature profile at any time in the evolution, spanning 4.5 Ga, with an average error under 0.3% on the test set. Tests indicate that with as few as 5% of the training samples (= simulations × time steps), one can achieve a test error below 0.5%, suggesting that for this setup one can potentially learn the mapping from fewer simulations.
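The "time as an input feature" construction and the 80/10/10 split described above can be sketched as follows (the sample counts and the split-by-simulation choice are assumptions made for illustration; the abstract does not specify them):

```python
import numpy as np

rng = np.random.default_rng(0)
n_sim, n_t = 100, 45                     # hypothetical counts, not the study's
params = rng.uniform(size=(n_sim, 5))    # the five sampled parameters
times = np.linspace(0.0, 4.5, n_t)       # evolution time in Ga

# one training sample per (simulation, time step): [p1..p5, t]
X = np.concatenate(
    [np.repeat(params, n_t, axis=0),     # parameters repeated per time step
     np.tile(times, n_sim)[:, None]],    # time appended as a sixth feature
    axis=1,
)

# 80/10/10 split over whole simulations, which avoids leakage between
# the time steps of a single run
order = rng.permutation(n_sim)
train_sims, val_sims, test_sims = order[:80], order[80:90], order[90:]
```

Treating time as an input lets a single feed-forward network predict the temperature profile at any epoch, instead of unrolling the evolution step by step.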
Finally, we ran a fourth batch of GAIA simulations and compared them to the output of our NN. In almost all cases, the instantaneous predictions of the 1D temperature profiles from the NN match those of the computationally expensive simulations extremely well, with an error below 0.5%.
How to cite: Agarwal, S., Tosi, N., Breuer, D., Padovan, S., and Kessel, P.: Mars’ thermal evolution from machine-learning-based 1D surrogate modelling, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-16162, https://doi.org/10.5194/egusphere-egu2020-16162, 2020.
EGU2020-8823 | Displays | ITS4.1/NP4.2
Towards a generalized framework for missing value imputation of fragmented Earth observation data
Verena Bessenbacher, Lukas Gudmundsson, and Sonia I. Seneviratne
The past decades have seen massive advances in generating Earth System observations. A plethora of instruments is, at any point in time, taking remote measurements of the Earth’s surface aboard satellites. This bird's-eye view of the land surface has become invaluable to the climate science and hydrology communities. However, the same variable is often observed by several platforms with contrasting results, and satellite observations have non-trivial patterns of missing values. Consequently, mostly a single remote sensing product is used at a time. This, together with the inherent missingness of the datasets, has led to a fragmentation of the observational record that limits the widespread use of remotely sensed land observations. We work towards a generalized framework for gap-filling global, high-resolution remote sensing measurements relevant for the terrestrial water cycle, focusing on ESA microwave soil moisture, land surface temperature and GPM precipitation. To this end, we explore statistical imputation methods and benchmark them using a “perfect dataset approach”, in which we apply the missingness pattern of the remote sensing datasets onto their matching variables in the ERA5 reanalysis; original and imputed values are subsequently compared for benchmarking. Our highly modular approach iteratively produces estimates for the missing values and fits a model to the whole dataset, in an expectation-maximisation-like fashion. This procedure is repeated until the estimates for the missing data points converge. The method harnesses the highly structured nature of gridded, covarying observational datasets within the flexible function-learning toolbox of data-driven approaches. The imputation utilises (1) the temporal autocorrelation and spatial neighborhood within one variable or dataset and (2) the different missingness patterns across variables or datasets, i.e. the fact that if one variable is missing at a given point in space and time, another covarying variable might be observed and their local covariance can be learned. A method based on simple ridge regression has been shown to perform best in terms of accuracy and computational cost, and is able to outperform simple “ad-hoc” gap-filling procedures. This model, once thoroughly tested, will be applied to gap-fill real satellite data and create an inherently consistent dataset that is based exclusively on observations.
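An expectation-maximisation-like imputation loop of the kind described, with ridge regression as the per-variable model, can be sketched as follows (a simplified matrix version over covarying columns; the actual framework additionally exploits spatio-temporal neighbourhoods):

```python
import numpy as np

def iterative_ridge_impute(X, mask, lam=1.0, n_iter=20):
    """EM-like gap filling. mask[i, j] is True where X[i, j] is missing.
    Each column with gaps is regressed on all other columns with ridge
    regression, and the missing entries are updated with the fitted
    values; the loop repeats until the estimates settle."""
    Xf = X.copy()
    # initialise missing entries with the column means of observed values
    col_means = np.nanmean(np.where(mask, np.nan, X), axis=0)
    Xf[mask] = np.take(col_means, np.where(mask)[1])
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            miss = mask[:, j]
            if not miss.any():
                continue
            others = np.delete(Xf, j, axis=1)     # predictor columns
            A = others[~miss]                     # rows where column j is observed
            w = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]),
                                A.T @ Xf[~miss, j])
            Xf[miss, j] = others[miss] @ w        # update the gap estimates
    return Xf
```

With strongly covarying columns, the learned local covariance fills gaps far more accurately than mean filling, which is the intuition behind point (2) above.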
How to cite: Bessenbacher, V., Gudmundsson, L., and Seneviratne, S. I.: Towards a generalized framework for missing value imputation of fragmented Earth observation data, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8823, https://doi.org/10.5194/egusphere-egu2020-8823, 2020.
EGU2020-20208 | Displays | ITS4.1/NP4.2
Classifying Global Low-Cloud Morphology with a Deep Learning Model: Results and Potential Use
Tianle Yuan
Marine low clouds display rich mesoscale morphological types, i.e. distinct spatial patterns of cloud fields. Being able to differentiate low-cloud morphology offers the research community a tool to go one step beyond bulk cloud statistics such as cloud fraction and to advance the understanding of low clouds. Here we report the progress of a NASA-funded project that aims to create an observational record of low-cloud mesoscale morphology at a near-global (60°S–60°N) scale. First, a training set is created by our team members manually labeling thousands of mesoscale (128×128 pixel) MODIS scenes into six categories: stratus, closed cellular convection, disorganized convection, open cellular convection, clustered cumulus convection, and suppressed cumulus convection. We then train a deep convolutional neural network on this training set to classify individual MODIS scenes at 128×128 resolution, and evaluate it on a held-out test set. The trained model achieves a cross-type average precision of about 93%. We apply the trained model to 16 years of data over the Southeast Pacific. The resulting climatological distribution of low-cloud morphology types shows both expected and unexpected features and suggests promising potential for low-cloud studies as a data product.
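Preparing fixed-size scenes from a larger granule is a simple tiling step, sketched below (the array shapes are illustrative; the project's actual preprocessing may differ):

```python
import numpy as np

def tile_scenes(field, size=128):
    """Cut a 2-D radiance/reflectance field into non-overlapping
    size x size scenes, dropping incomplete tiles at the edges."""
    ny = field.shape[0] - field.shape[0] % size
    nx = field.shape[1] - field.shape[1] % size
    return (field[:ny, :nx]
            .reshape(ny // size, size, nx // size, size)
            .swapaxes(1, 2)
            .reshape(-1, size, size))
```

Each returned tile is one classifier input; a softmax over the six morphology classes would then be applied per tile to build the near-global record.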
How to cite: Yuan, T.: Classifying Global Low-Cloud Morphology with a Deep Learning Model: Results and Potential Use, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20208, https://doi.org/10.5194/egusphere-egu2020-20208, 2020.
EGU2020-20655 | Displays | ITS4.1/NP4.2 | Highlight
From remote sensing to bioeconomy: how big data can improve automatic map generation
Jonathan Rizzi, Ingvild Nystuen, Misganu Debella-Gilo, and Nils Egil Søvde
Recent years have seen an exponential increase in remote sensing datasets coming from different sources (satellites, airplanes, UAVs) at different resolutions (up to a few cm) based on different sensors (single-band sensors, hyperspectral cameras, LIDAR, …). At the same time, IT developments allow for the storage of very large datasets (up to petabytes) and their efficient processing (through HPC, distributed computing, and the use of GPUs). This has enabled the development and diffusion of many libraries and packages that implement machine learning algorithms very efficiently. It has therefore become possible to apply machine learning (including deep learning methods such as convolutional neural networks) to spatial datasets with the aim of increasing the degree of automation in creating new maps or updating existing ones.
Within this context, the Norwegian Institute of Bioeconomy Research (NIBIO) has started a project to test and apply big data methods and tools to support research activity transversally across its divisions. NIBIO is a research-based knowledge institution that utilizes its expertise and professional breadth for the development of the bioeconomy in Norway. Its social mission entails a national responsibility in the bioeconomy sector, focusing on several societal challenges, including: i) climate (emission reductions, carbon uptake and climate adaptation); ii) sustainability (environment, resource management and production within nature's and society's tolerance limits); iii) transformation (circular economy, resource-efficient production systems, innovation and technology development); iv) food; and v) economy.
The presentation will show results obtained for land cover mapping using different methods and different datasets, including satellite images and airborne hyperspectral images. Further, it will focus on the difficulties of automatic mapping from remote sensing datasets and the importance of the availability of large training datasets.
How to cite: Rizzi, J., Nystuen, I., Debella-Gilo, M., and Søvde, N. E.: From remote sensing to bioeconomy: how big data can improve automatic map generation, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20655, https://doi.org/10.5194/egusphere-egu2020-20655, 2020.
EGU2020-18163 | Displays | ITS4.1/NP4.2
Explainable deep learning to predict and understand crop yield estimatesAleksandra Wolanin, Gonzalo Mateo-García, Gustau Camps-Valls, Luis Gómez-Chova, Michele Meroni, Gregory Duveiller, You Liangzhi, and Luis Guanter
Estimating crop yields is becoming increasingly relevant under the current context of an expanding world population accompanied by rising incomes in a changing climate. Crop growth, crop development, and final grain yield are all determined by environmental conditions in a complex nonlinear manner. Machine learning (ML), and deep learning (DL) methods in particular, can account for such nonlinear relations between yield and its drivers. However, they typically lack transparency and interpretability, which in the context of yield forecasting is of great relevance. Here, we explore how to benefit from the increased predictive performance of DL methods without compromising the ability to interpret how the models achieve their results for an example of the wheat yield in the Indian Wheat Belt.
We applied a convolutional neural network to multivariate time series of meteorological and satellite-derived vegetation variables at a daily resolution to estimate the wheat yield in the Indian Wheat Belt. Afterwards, the features and yield drivers learned by the model were visualized and analyzed with the use of regression activation maps. The learned features were primarily related to the length of the growing season, temperature, and light conditions during the growing season. Our analysis showed that high yields in 2012 were associated with low temperatures accompanied by sunny conditions during the growing period. The proposed methodology can be used for other crops and regions in order to facilitate application of DL models in agriculture.
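The regression-activation-map idea can be sketched in a few lines (random weights stand in for a trained model, and all shapes are illustrative): after a 1-D convolution over the daily multivariate series, the per-timestep contribution to the predicted yield is simply the feature map projected through the linear regression head.

```python
import numpy as np

rng = np.random.default_rng(1)
T, C, F = 120, 5, 8                       # time steps, input variables, conv filters
x = rng.standard_normal((T, C))           # daily meteorological/vegetation series
W = rng.standard_normal((3, C, F)) * 0.1  # width-3 convolution kernels (untrained)
w_out = rng.standard_normal(F) * 0.1      # regression head after global-average pooling

# Forward pass: valid 1-D convolution + ReLU.
feat = np.zeros((T - 2, F))
for t in range(T - 2):
    feat[t] = np.maximum(0, np.einsum('kc,kcf->f', x[t:t + 3], W))
yield_pred = feat.mean(axis=0) @ w_out

# Regression activation map: per-timestep contribution to the prediction.
ram = feat @ w_out
print(yield_pred, ram.shape)
```

Because pooling and the head are linear, the map averages back exactly to the prediction, which is what makes it interpretable as an attribution over the growing season.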
References:
Wolanin, A., Mateo-García, G., Camps-Valls, G., Gómez-Chova, L., Meroni, M., Duveiller, G., You, L., and Guanter, L. (2020): Estimating and Understanding Crop Yields with Explainable Deep Learning in the Indian Wheat Belt, Environmental Research Letters.
How to cite: Wolanin, A., Mateo-García, G., Camps-Valls, G., Gómez-Chova, L., Meroni, M., Duveiller, G., Liangzhi, Y., and Guanter, L.: Explainable deep learning to predict and understand crop yield estimates, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18163, https://doi.org/10.5194/egusphere-egu2020-18163, 2020.
EGU2020-2874 | Displays | ITS4.1/NP4.2
Using Convolutional Neural Networks for the prediction of groundwater levelsMaximilian Nölscher, Hartmut Häntze, Stefan Broda, Lena Jäger, Paul Prasse, and Silvia Makowski
The temporal prediction of groundwater levels plays an important role in groundwater management, for example in estimating anthropogenic impacts as well as the consequences of climatic changes. The modeling of groundwater levels using physics-based approaches is therefore an integral part of hydrogeology. Data-driven approaches, however, have only recently been used for the prediction of groundwater levels, in particular machine learning techniques (e.g., Random Forests and Neural Networks). Typically, a separate model is set up for each observation well or time series. To develop this further, an approach is presented that uses a single model for the prediction of groundwater levels at several observation wells (n > 200). The model is a three-dimensional Convolutional Neural Network (CNN).
In addition to the time series of groundwater levels, meteorological data on precipitation (P) and temperature (T) serve as additional input channels. The CNN "sees" not only the P or T value of the grid cell in which the observation well lies, but also the surrounding values. This has the advantage that even the influence of meteorological patterns in the spatial vicinity of the observation well on the groundwater level can be learned. Forecasts are calculated for periods of up to six months. In addition to the comparison with measured values, the error averaged over all observation wells is compared against a baseline model for validation. To further improve predictability, the hyperparameters are optimized and other areal data (e.g., geology, soil properties, land use) are used as input. This model is intended to form the basis for a regionalized forecast of groundwater levels.
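The core operation of such a model can be illustrated with a single untrained 3-D convolution over a channel/time/lat/lon tensor (patch size, time depth, and kernel shape below are arbitrary choices, not the authors' configuration): the kernel mixes the P and T channels while sliding over both time and the spatial neighbourhood of the well.

```python
import numpy as np

rng = np.random.default_rng(2)
# One training sample for one well: channels (P, T) x time x lat x lon patch.
x = rng.standard_normal((2, 12, 5, 5))       # 12 time steps, 5x5 grid cells
k = rng.standard_normal((2, 3, 3, 3)) * 0.1  # one 3-D kernel spanning both channels

# Valid 3-D convolution producing a (time, lat, lon) = (10, 3, 3) feature map.
out = np.zeros((10, 3, 3))
for t in range(10):
    for i in range(3):
        for j in range(3):
            out[t, i, j] = np.sum(x[:, t:t + 3, i:i + 3, j:j + 3] * k)
print(out.shape)  # (10, 3, 3)
```

In a deep learning framework this is one `Conv3d` layer; stacking several of them is what lets the network learn spatial meteorological patterns around the well rather than only the local cell value.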
How to cite: Nölscher, M., Häntze, H., Broda, S., Jäger, L., Prasse, P., and Makowski, S.: Using Convolutional Neural Networks for the prediction of groundwater levels, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2874, https://doi.org/10.5194/egusphere-egu2020-2874, 2020.
EGU2020-11450 | Displays | ITS4.1/NP4.2
A machine learning based monitoring framework for CO2 storageAnouar Romdhane, Scott Bunting, Jo Eidsvik, Susan Anyosa, and Per Bergmo
With the increasingly visible effects of climate change and a growing awareness of its possible consequences, Carbon Capture and Storage (CCS) technologies are gaining momentum. Preparations are currently underway in Norway for a full-scale CCS project in which CO2 will be stored in a deep saline aquifer. A possible candidate for such storage is Smeaheia, located in the North Sea.
One of the main risks related to large-scale storage projects is leakage of CO2 out of the storage complex. It is important to design measurement, monitoring and verification (MMV) plans addressing leakage risk together with other risks related to conformance and containment verification. In general, geophysical monitoring represents a significant part of storage monitoring costs. Tailored and cost-effective geophysical monitoring programs that consider the trade-off between value and cost are therefore required. A risk-based approach can be adopted to plan the monitoring, but a more quantitative approach from decision analysis is value of information (VOI) analysis. In such an analysis one defines a decision problem and measures the value of information as the additional value obtained by purchasing information before making the decision.
In this work, we study the VOI of seismic data in the context of CO2 storage decision making. Our goal is to evaluate when a seismic survey has the highest value for detecting a potential leakage of CO2, in a dynamic decision problem where we can either stop or continue the injection. We describe the proposed workflow and illustrate it through a constructed case study using a simplified Smeaheia model. We combine Monte Carlo and statistical regression techniques to estimate the VOI at different times. In a first stage, we define the decision problem. We then efficiently generate 10000 possible distributions of CO2 saturation using a reduced-order reservoir simulation tool. We consider both leaking and non-leaking scenarios and account for uncertainties in petrophysical properties (porosity and permeability distributions). From the simulated CO2 saturations, we derive distributions of geophysical properties and model the corresponding seismic data. We then regress these values on the reference seismic data to estimate the VOI. We evaluate two machine learning based regression techniques: k-nearest neighbours regression with principal components and a convolutional neural network (CNN). Both results are compared. We observe that VOI estimates obtained using the k-nearest neighbours regression were consistently lower than those obtained using the CNN. Through bootstrapping, we show that the k-nearest neighbours approach produced more stable VOI estimates than the neural network method. We analyse possible reasons for the high variability observed with neural networks and suggest means to mitigate it.
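The Monte Carlo + regression logic of a VOI estimate can be sketched on a toy stop/continue problem (the leakage prior, payoffs, noise level, and the 1-D running-mean stand-in for k-nearest-neighbours regression are all invented for illustration): VOI is the expected value of the best decision after seeing the data minus the value of the best decision without it.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10000
leak = rng.random(n) < 0.3                  # prior leakage probability (assumed 30%)
data = leak + rng.normal(0.0, 0.5, n)       # noisy scalar "seismic" measurement

# Payoff of each decision: continuing injection pays off only if not leaking.
v_continue = np.where(leak, -1.0, 1.0)
v_stop = 0.0

# Prior value: best decision made without any data.
prior = max(v_continue.mean(), v_stop)

# Posterior value: regress v_continue on the data (here a k-NN-style running
# mean over the 1-D data axis), then take the best decision per sample.
k = 100
order = np.argsort(data)
v_sorted = v_continue[order]
knn_estimate = np.convolve(v_sorted, np.ones(k) / k, mode='same')
posterior = np.maximum(knn_estimate, v_stop).mean()

voi = posterior - prior
print(round(voi, 3))
```

Because the measurement is informative about leakage, the posterior value exceeds the prior value, and the difference is the (strictly positive) worth of acquiring the survey before deciding.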
Acknowledgments
This publication has been produced with support from the NCCS Centre (NFR project number 257579/E20).
How to cite: Romdhane, A., Bunting, S., Eidsvik, J., Anyosa, S., and Bergmo, P.: A machine learning based monitoring framework for CO2 storage, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11450, https://doi.org/10.5194/egusphere-egu2020-11450, 2020.
EGU2020-19667 | Displays | ITS4.1/NP4.2
Mineral interpretation results using deep learning with hyperspectral imageryAndrés Bell, Carlos Roberto Del-Blanco, Fernando Jaureguizar, Narciso García, and María José Jurado
Minerals are key resources for several industries, such as the manufacturing of high-performance components and the latest electronic devices. For the purpose of finding new mineral deposits, mineral interpretation is a task of great relevance in the mining and metallurgy sectors. However, it is usually a long, costly, laborious, and manual procedure. It involves the characterization of mineral samples in laboratories far from the mineral deposits and is subject to human interpretation mistakes. To address these problems, an automatic mineral recognition system is proposed that analyzes in real time hyperspectral imagery acquired in different spectral ranges: VN-SWIR (Visible, Near and Short Wave Infrared) and LWIR (Long Wave Infrared). Thus, more efficient, faster, and more economic explorations are performed by analyzing mineral deposits in situ in the subsurface, instead of in laboratories. The developed system is based on a deep learning technique that implements a semantic segmentation neural network considering both spatial and spectral correlations. Two databases composed of scanned drilled mineral cores from different mineral deposits have been used to evaluate the mineral interpretation capability. The first database contains hyperspectral images in the VN-SWIR range and the second one in the LWIR range. The obtained results show that mineral recognition for the first database (VN-SWIR band) achieves an accuracy of 86% considering the following mineral classes: actinolite, amphibole, biotite-chlorite, carbonate, epidote, saponite, whitemica and whitemica-chlorite. For the second database (LWIR band), an accuracy of 90% has been obtained with the following mineral classes: albite, amphibole, apatite, carbonate, clinopyroxene, epidote, microcline, quartz, quartz-clay-feldspar and sulphide-oxide. The mineral recognition capability has also been compared between both spectral bands considering the minerals common to both databases.
The results show a higher recognition performance in the LWIR band, which achieves an accuracy of 96%, than in the VN-SWIR bands, which achieve an accuracy of 85%. However, hyperspectral cameras covering the VN-SWIR range are significantly more economic than those covering the LWIR range, making them a very interesting option for low-budget systems while still providing good mineral recognition performance. On the other hand, recognition is better for those mineral categories with a higher number of samples in the databases, as expected. Acknowledgement: This research was funded by the EIT Raw Materials through the Innovative geophysical logging tools for mineral exploration - 16350 InnoLOG Upscaling Project.
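A heavily simplified, spectral-only version of per-pixel mineral labelling can be sketched as follows (random weights replace a trained network, the cube and the eight classes are synthetic, and a real semantic segmentation model would also exploit spatial context): each pixel's spectrum is scored against every class and assigned the argmax.

```python
import numpy as np

rng = np.random.default_rng(4)
bands, H, W, n_classes = 50, 20, 30, 8      # hyperspectral cube, 8 mineral classes
cube = rng.random((bands, H, W))            # synthetic core-scan hypercube
weights = rng.standard_normal((n_classes, bands)) * 0.1  # stand-in for a trained head

# Per-pixel softmax classification over the spectral axis.
logits = np.einsum('cb,bhw->chw', weights, cube)         # (classes, H, W)
probs = np.exp(logits - logits.max(axis=0))
probs /= probs.sum(axis=0)
mineral_map = probs.argmax(axis=0)                       # (H, W) class-index map
print(mineral_map.shape)  # (20, 30)
```

The output is a mineral-class map of the core surface; accuracy figures like the 86%/90% reported above come from comparing such maps against expert-labelled ground truth.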
How to cite: Bell, A., Del-Blanco, C. R., Jaureguizar, F., García, N., and Jurado, M. J.: Mineral interpretation results using deep learning with hyperspectral imagery, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19667, https://doi.org/10.5194/egusphere-egu2020-19667, 2020.
EGU2020-20954 | Displays | ITS4.1/NP4.2
Using computer vision and deep learning for acquisition and processing of low-distortion sediment core imagesStephen Obrochta, Szilárd Fazekas, and Jan Morén
Imaging the split surface of sediment cores is standard procedure across a range of geoscience fields. However, obtaining high-resolution, continuous images with very little distortion has traditionally required expensive and fragile line-scanning systems that may be difficult or impossible to transport into the field. Thus, many researchers take photographs of entire core sections, which may result in distortion, particularly at the upper and lower edges. Using computer vision techniques, we developed a set of open source tools for seamlessly stitching together a series of photographs, taken with any camera, of the split surface of a sediment core. The resulting composite image contains less distortion than a single photograph of the entire core section, particularly when combined with a simple camera sliding mechanism. The method allows for detection of and correction for variable camera tilt and rotation between adjacent pairs of images. We trained a deep neural network to post-process the image, automating the tedious task of segmenting the sediment core from the background while also detecting the location of the accompanying scale bar and cracks or other areas of coring-induced disturbance. A color reflectance record is then generated from the isolated core image, ignoring variations from, e.g., cracks and voids.
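The essence of stitching overlapping core photographs can be shown with a translation-only toy (the synthetic "core" and the brute-force offset search are illustrative; the actual tools also handle tilt and rotation): the correct horizontal offset between two frames is the one that minimizes the mismatch in their overlap region.

```python
import numpy as np

rng = np.random.default_rng(5)
core = rng.random((40, 300))                    # "true" split-core surface (grayscale)
img_a, img_b = core[:, :180], core[:, 150:]     # two overlapping photographs

# Brute-force search for the horizontal offset of img_b relative to img_a.
best_off, best_err = None, np.inf
for off in range(100, 180):
    overlap = img_a.shape[1] - off
    err = np.mean((img_a[:, off:] - img_b[:, :overlap]) ** 2)
    if err < best_err:
        best_off, best_err = off, err

# Composite: keep img_a up to the seam, then append img_b.
stitched = np.hstack([img_a[:, :best_off], img_b])
print(best_off, stitched.shape)  # 150 (40, 300)
```

Here the recovered offset is exact, so the composite reproduces the original surface; with real photographs, feature matching and a homography per image pair replace the exhaustive search.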
How to cite: Obrochta, S., Fazekas, S., and Morén, J.: Using computer vision and deep learning for acquisition and processing of low-distortion sediment core images, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20954, https://doi.org/10.5194/egusphere-egu2020-20954, 2020.
EGU2020-17149 | Displays | ITS4.1/NP4.2
Tree Species Identification in a Northern Temperate Forest in the United States Using Manifold Learning and Transfer Learning from Hyperspectral DataDonghui Ma and Yun Shi
EGU2020-10472 | Displays | ITS4.1/NP4.2
Using Machine Learning for processing Big Data of Copernicus Satellite Sensors at the Example of the TROPOMI / Sentinel-5 Precursor and Sentinel-4 Cloud ProductFabian Romahn, Athina Argyrouli, Ronny Lutz, Diego Loyola, and Victor Molina Garcia
The satellites of the Copernicus program show the increasing relevance of properly handling the huge amount of Earth observation data, nowadays common in remote sensing. This is further challenging if the processed data has to be provided in near real time (NRT), like the cloud product from TROPOMI / Sentinel-5 Precursor (S5P) or the upcoming Sentinel-4 (S4) mission.
In order to solve the inverse problems that arise in the retrieval of cloud products, as well as in similar remote sensing problems, complex radiative transfer models (RTMs) are usually used. These are very accurate, but also computationally very expensive and therefore often not feasible under NRT requirements. With the recent significant breakthroughs in machine learning, and easier application through better software and more powerful hardware, the methods of this field have become very interesting as a way to improve classical remote sensing algorithms.
In this presentation we show how artificial neural networks (ANNs) can be used to replace the original RTM in the ROCINN (Retrieval Of Cloud Information using Neural Networks) algorithm with sufficient accuracy while increasing the computational performance at the same time by several orders of magnitude.
We developed a general procedure which consists of smart sampling, generation and scaling of the training data, as well as training, validation and finally deployment of the ANN into the operational processor. In order to minimize manual work, the procedure is highly automated and uses the latest technologies such as TensorFlow. It is applicable to any kind of RTM and can thus be used for many retrieval algorithms, as is already done for ROCINN in S5P and will soon be done for ROCINN in the context of S4. Regarding the final performance of the generated ANN, there are several critical parameters with a high impact (e.g., the structure of the ANN); these will be evaluated in detail. Furthermore, we also show general limitations of ANNs in comparison with RTMs, how these can lead to unexpected side effects, and ways to cope with these issues.
With the example of ROCINN, as part of the operational S5P and upcoming S4 cloud product, we show the great potential of machine learning techniques in improving the performance of classical retrieval algorithms and thus increasing their capability to deal with much larger data quantities. However, we also highlight the importance of a proper configuration and possible limitations.
How to cite: Romahn, F., Argyrouli, A., Lutz, R., Loyola, D., and Molina Garcia, V.: Using Machine Learning for processing Big Data of Copernicus Satellite Sensors at the Example of the TROPOMI / Sentinel-5 Precursor and Sentinel-4 Cloud Product, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10472, https://doi.org/10.5194/egusphere-egu2020-10472, 2020.
The satellites of the Copernicus program show the increasing relevance of properly handling the huge amount of Earth observation data, nowadays common in remote sensing. This is further challenging if the processed data has to be provided in near real time (NRT), like the cloud product from TROPOMI / Sentinel-5 Precursor (S5P) or the upcoming Sentinel-4 (S4) mission.
In order to solve the inverse problems that arise in the retrieval of cloud products, as well as in similar remote sensing problems, usually complex radiative transfer models (RTMs) are used. These are very accurate, however also computationally very expensive and therefore often not feasible in combination with NRT requirements. With the recent significant breakthroughs in machine learning, easier application through better software and more powerful hardware, the methods of this field have become very interesting as a way to improve the classical remote sensing algorithms.
In this presentation we show how artificial neural networks (ANNs) can be used to replace the original RTM in the ROCINN (Retrieval Of Cloud Information using Neural Networks) algorithm with sufficient accuracy, while at the same time increasing the computational performance by several orders of magnitude.
We developed a general procedure consisting of smart sampling, generation, and scaling of the training data, followed by training, validation, and finally deployment of the ANN into the operational processor. To minimize manual work, the procedure is highly automated and uses current technologies such as TensorFlow. It is applicable to any kind of RTM and can therefore serve many retrieval algorithms, as is already done for ROCINN in S5P and will soon be done for ROCINN in the context of S4. Regarding the final performance of the generated ANN, several critical parameters have a high impact (e.g. the structure of the ANN); these are evaluated in detail. Furthermore, we also show general limitations of ANNs compared with RTMs, how these can lead to unexpected side effects, and ways to cope with these issues.
With the example of ROCINN, as part of the operational S5P and upcoming S4 cloud products, we show the great potential of machine learning techniques to improve the performance of classical retrieval algorithms and thus increase their capability to deal with much larger data quantities. However, we also highlight the importance of proper configuration and note possible limitations.
How to cite: Romahn, F., Argyrouli, A., Lutz, R., Loyola, D., and Molina Garcia, V.: Using Machine Learning for processing Big Data of Copernicus Satellite Sensors at the Example of the TROPOMI / Sentinel-5 Precursor and Sentinel-4 Cloud Product, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10472, https://doi.org/10.5194/egusphere-egu2020-10472, 2020.
EGU2020-817 | Displays | ITS4.1/NP4.2
A hybrid CNN-LSTM based model for the prediction of sea surface temperature using time-series satellite data – Pavan Kumar Jonnakuti and Udaya Bhaskar Tata Venkata Sai
Sea surface temperature (SST) is a key variable of the global ocean that affects air-sea interaction processes. Forecasts based on statistical and classical machine learning techniques have not succeeded in capturing the spatial and temporal relationships in time-series data. Therefore, to achieve precise SST prediction we propose a deep-learning-based model that produces a more realistic and accurate account of SST behaviour by focusing on both space and time. Our hybrid CNN-LSTM model uses multiple processing layers to learn hierarchical representations, implementing 3D and 2D convolutional neural networks to better capture the spatial features, while an LSTM examines the temporal sequence of relations in the SST time-series satellite data. Extensive experiments based on historical satellite datasets for the Indian Ocean region, spanning from 1980 to the present, show that our proposed deep-learning-based CNN-LSTM model is highly capable of accurate short- and mid-term daily SST prediction, based on the error estimates (obtained from the LSTM) of the forecasted datasets.
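As a minimal illustration of the data preparation such a spatiotemporal model needs, the sketch below frames synthetic gridded SST into (input-sequence, next-day-target) samples; the window length and grid size are arbitrary choices for the example, not taken from the abstract.

```python
import numpy as np

# Synthetic daily SST grids with shape (days, lat, lon).
sst = np.arange(10 * 4 * 4, dtype=float).reshape(10, 4, 4)

def make_sequences(fields, window=5, horizon=1):
    # Slide a window over the time axis: each sample pairs `window`
    # consecutive grids (the CNN-LSTM input) with the grid `horizon`
    # days ahead (the prediction target).
    X, y = [], []
    for t in range(len(fields) - window - horizon + 1):
        X.append(fields[t:t + window])
        y.append(fields[t + window + horizon - 1])
    return np.stack(X), np.stack(y)

X, y = make_sequences(sst)
# X.shape == (5, 5, 4, 4): five samples of five-day input sequences
# y.shape == (5, 4, 4): the next-day grid for each sample
```

The CNN layers would then extract spatial features from each grid in a sample, and the LSTM would model the ordering along the window axis.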
Keywords: Deep Learning, Sea Surface Temperature, CNN, LSTM, Prediction.
How to cite: Jonnakuti, P. K. and Tata Venkata Sai, U. B.: A hybrid CNN-LSTM based model for the prediction of sea surface temperature using time-series satellite data., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-817, https://doi.org/10.5194/egusphere-egu2020-817, 2020.
EGU2020-1220 | Displays | ITS4.1/NP4.2
Dam Deformation Prediction Based on EEMD-ELM Model – Tao Yan and Bo Chen
Establishing a reasonable and reliable dam deformation monitoring model is of great significance for effective analysis of dam deformation monitoring data and accurate assessment of dam working conditions. First, the dam deformation series is decomposed by the EEMD algorithm into IMF components representing different characteristic scales, and different influencing factors are selected for each IMF component. Second, each IMF component is used as an ELM training sample to analyze, fit and predict that deformation component. Finally, the predictions of the individual IMF components are summed to obtain the dam deformation prediction. Taking a roller-compacted concrete gravity dam as an example, the EEMD-ELM model is used to predict the deformation of the dam and is compared with the predictions of a BPNN model and a plain ELM model. The mean square error of the EEMD-ELM model is 0.566, which is 54% and 14.8% lower than that of the BPNN and ELM models respectively, indicating that the EEMD-ELM model has higher prediction accuracy and practical application value.
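The decompose-predict-sum idea can be sketched roughly as follows. A moving-average split stands in for EEMD (which would yield several IMFs rather than two components), and a random-feature least-squares regressor plays the role of the ELM; the deformation series and all sizes are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic deformation series: slow trend + seasonal signal + noise.
t = np.arange(200, dtype=float)
series = 0.01 * t + np.sin(2 * np.pi * t / 25) + 0.05 * rng.normal(size=200)

# Stand-in for EEMD: split into a slow trend and a fast residual.
kernel = np.ones(25) / 25
trend = np.convolve(series, kernel, mode="same")
components = [series - trend, trend]

def elm_fit_predict(y, lags=10, hidden=50):
    # Extreme learning machine: a random, untrained hidden layer; only the
    # output weights are solved, by least squares, from lagged values.
    X = np.column_stack([y[i:len(y) - lags + i] for i in range(lags)])
    target = y[lags:]
    W = rng.normal(size=(lags, hidden))
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)
    beta, *_ = np.linalg.lstsq(H, target, rcond=None)
    return H @ beta  # in-sample fit of this component

# Fit each component separately, then sum the results (the EEMD-ELM idea).
fit = sum(elm_fit_predict(c) for c in components)
mse = float(np.mean((fit - series[10:]) ** 2))
```

Fitting each scale separately lets the simple regressor use different dynamics (and, in the paper, different influencing factors) per component instead of one model for the raw mixed signal.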
Keywords: dam deformation; prediction model; ensemble empirical mode decomposition; extreme learning machine
How to cite: Yan, T. and Chen, B.: Dam Deformation Prediction Based on EEMD-ELM Model, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1220, https://doi.org/10.5194/egusphere-egu2020-1220, 2020.
EGU2020-1242 | Displays | ITS4.1/NP4.2
Sub-pixel classification of anthropogenic features using deep-learning on Sentinel-2 data – Nikolaos Ioannis Bountos, Melanie Brandmeier, and Mark Günter
Urban landscapes are among the fastest-changing areas on the planet. However, regular monitoring of larger areas is not feasible with UAVs or costly airborne data. In these situations, satellite data with a high temporal resolution and large field of view are more appropriate but suffer from lower spatial resolution (tens of meters). In the present study we show that, using freely available Sentinel-2 data from the Copernicus program, we can extract anthropogenic features such as roads, railways and building footprints that are partly or completely at a sub-pixel level in this kind of data. Additionally, we propose a new metric for evaluating our methods on sub-pixel objects. This metric measures detection performance while penalizing false positive classification. Given that our training samples contain one class, we define two thresholds representing the lower bound of accuracy for the object to be classified and for the background. We thus avoid a good score in cases where we classify the object correctly but a wide area of the background has been included in the prediction. We investigate the performance of different deep-learning architectures for sub-pixel classification of the different infrastructure elements based on Sentinel-2 multispectral data and labels derived from UAV data. Our study area is located in the Rhone valley in Switzerland, where very high-resolution UAV data were available from the University of Applied Sciences. Highly accurate labels for the respective classes were digitized in ArcGIS Pro and used as ground truth for the Sentinel data. We trained different deep learning models based on state-of-the-art architectures for semantic segmentation, such as DeepLab and U-Net. Our approach focuses on exploiting the multispectral information to improve on the performance of the RGB channels alone.
For that purpose, we make use of the NIR and SWIR 10 m and 20 m bands of the Sentinel-2 data. We investigate early- and late-fusion approaches and the behavior and contribution of each multispectral band in improving performance compared to using only the RGB channels. In the early-fusion approach, we stack nine Sentinel-2 bands (RGB, NIR, SWIR) together, pass them through two convolutions followed by batch normalization and ReLU layers, and then feed the tiles to DeepLab. In the late-fusion approach, we create a CNN with two branches, the first processing the RGB channels and the second the NIR/SWIR bands. We use modified DeepLab layers for the two branches and then concatenate the outputs into a total of 512 feature maps, whose dimensionality is then reduced in two convolution layers to a final output equal to the number of classes. We experiment with different settings for all of the mentioned architectures. In the best case, we achieve 89% overall accuracy, with 60% building accuracy, 60% street accuracy, 73% railway accuracy, 92% river accuracy and 94% background accuracy.
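One possible reading of the two-threshold metric can be sketched as follows. The threshold values and the exact definition are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

def subpixel_score(pred, truth, obj_thresh=0.5, bg_thresh=0.9):
    # A detection only counts if enough object pixels are recovered AND
    # the background stays accurate enough, which penalizes predictions
    # that smear the object class over the background.
    obj = truth == 1
    obj_acc = float((pred[obj] == 1).mean())   # recall on the object pixels
    bg_acc = float((pred[~obj] == 0).mean())   # accuracy on the background
    return obj_acc, bg_acc, bool(obj_acc >= obj_thresh and bg_acc >= bg_thresh)

truth = np.zeros((10, 10), dtype=int)
truth[4:6, 4:6] = 1                        # a 4-pixel sub-pixel "building"
overshoot = np.ones_like(truth)            # finds the object, floods the rest
ok = subpixel_score(truth, truth)[2]       # perfect prediction passes
bad = subpixel_score(overshoot, truth)[2]  # flooded prediction fails
```

The `overshoot` case is exactly the failure mode described above: the object is fully detected, but the score is withheld because the background accuracy falls below its threshold.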
How to cite: Bountos, N. I., Brandmeier, M., and Günter, M.: Sub-pixel classification of anthropogenic features using deep-learning on Sentinel-2 data, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1242, https://doi.org/10.5194/egusphere-egu2020-1242, 2020.
EGU2020-9966 | Displays | ITS4.1/NP4.2
Exploring Earth Science Applications using Word Embeddings – Derek Koehl, Carson Davis, Rahul Ramachandran, Udaysankar Nair, and Manil Maskey
Word embeddings are numeric representations of text that capture meanings and semantic relationships. Embeddings can be constructed using different methods, such as one-hot encoding, frequency-based, or prediction-based approaches. Prediction-based approaches such as Word2Vec can generate word embeddings that capture the underlying semantics and word relationships in a corpus. Studies have shown that Word2Vec embeddings generated from a domain-specific corpus can both predict relationships and augment word vectors to improve classifications. We describe results from two experiments utilizing word embeddings for Earth science, constructed with Word2Vec from a corpus of over 20,000 journal papers.
The first experiment explores the analogy prediction performance of word embeddings built from the Earth science journal corpus and trained on domain-specific vocabulary. Our results demonstrate that domain-specific word embeddings predict Earth science analogy questions more accurately than general-corpus embeddings predict general analogy questions. While the results were as anticipated, the substantial increase in accuracy, particularly in the lexicographical domain, was encouraging. The results point to the need for a comprehensive Earth science analogy test set that covers the full breadth of lexicographical and encyclopedic categories for validating word embeddings.
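The offset-plus-cosine-similarity analogy test can be sketched as follows, with a made-up four-dimensional embedding table standing in for vectors actually trained on the journal corpus.

```python
import numpy as np

# Made-up 4-d embedding table; real vectors would come from Word2Vec
# trained on the 20,000-paper Earth science corpus.
emb = {
    "cloud":   np.array([1.0, 0.2, 0.0, 0.1]),
    "rain":    np.array([0.9, 0.3, 0.1, 0.0]),
    "glacier": np.array([0.0, 1.0, 0.8, 0.2]),
    "ice":     np.array([0.1, 0.9, 0.9, 0.1]),
    "soil":    np.array([0.5, 0.5, 0.2, 0.9]),
}

def analogy(a, b, c, emb):
    # "a is to b as c is to ?", answered by vector offset + cosine similarity.
    q = emb[b] - emb[a] + emb[c]
    best, best_sim = None, -2.0
    for w, v in emb.items():
        if w in (a, b, c):
            continue  # conventional: exclude the three query words
        sim = float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v)))
        if sim > best_sim:
            best, best_sim = w, sim
    return best

result = analogy("cloud", "rain", "glacier", emb)  # → "ice"
```

With these toy vectors the offset query lands closest to "ice"; a real evaluation would run thousands of such questions against the full vocabulary.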
The second experiment uses the word embeddings to augment metadata keyword classification. Metadata describing NASA datasets carry manually assigned science keywords, which can lead to errors and inconsistencies. These science keywords come from a controlled vocabulary and are used to aid data discovery via faceted search and relevancy ranking. Given the small number of metadata records with proper descriptions and keywords, word embeddings were used for augmentation. A fully connected neural network was trained to suggest keywords given a description text. This approach provided the best accuracy of the methods tested, at ~76%.
How to cite: Koehl, D., Davis, C., Ramachandran, R., Nair, U., and Maskey, M.: Exploring Earth Science Applications using Word Embeddings, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9966, https://doi.org/10.5194/egusphere-egu2020-9966, 2020.
EGU2020-6189 | Displays | ITS4.1/NP4.2
Application of GIS technology and natural language processing technology in automatic generation of marine weather bulletin – Xinping Bai, Zhongliang Lv, and Hui Wang
The Marine Weather Bulletin is the main weather service product of the China Central Meteorological Observatory. Based on five-kilometer grid forecast data, it comprehensively describes forecast information on wind force, wind direction, sea fog level and visibility for eighteen offshore areas of China, and is issued three times a day. In the traditional production process, the forecaster manually interprets the massive amount of information in the grid data, describes it manually in natural language (including combined descriptions to highlight the overall trend), and finally edits the document by hand, inserting graphics and formatting. This causes low writing efficiency and uneven quality that cannot meet requirements for timeliness, refinement and diversity. Automatic generation of marine weather bulletins has therefore become an urgent operational need.
This paper proposes a method that uses GIS technology and natural language processing to develop a text feature extraction model for sea gales and sea fog, and finally uses Aspose technology to automatically generate marine weather bulletins from custom templates.
First, GIS technology is used to extract the spatiotemporal characteristics of the meteorological information. This includes converting grid data into vector area data and performing GIS spatial overlay and fusion analysis on the multi-level marine meteorological areas and Chinese sea areas, in order to mine information on the scale, influence area, and time frequency of gales and fog in different geographic areas.
Next, natural language processing, an important branch of artificial intelligence, is applied to the spatiotemporal information on marine weather elements, mainly using statistical machine learning. By mining more than 1000 historical bulletins, content planning puts large numbers of marine weather element words and cohesive words through automatic word segmentation, part-of-speech statistics and word extraction, creating preliminarily classified text description templates for the different elements. Through extended machine learning, sentence planning refines rules for filtering and merging sea areas, for merging wind force and wind direction, for describing sea fog visibility, for merging different parts of the same sea area, for merging multiple forecast texts, and so on. Based on these rules, omission, referencing and merging are used to make the descriptions more smooth, natural and refined.
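One of the merging rules described above, combining adjacent sea areas that share the same forecast, might look like the following sketch; the area names and forecast strings are invented for the example.

```python
# Hypothetical input: (sea area, forecast text) pairs in geographic order.
forecasts = [
    ("Bohai Sea", "NE wind force 6-7"),
    ("Yellow Sea", "NE wind force 6-7"),
    ("East China Sea", "SE wind force 5"),
]

def merge_descriptions(forecasts):
    # Sentence-planning merge rule: consecutive areas whose forecast text
    # is identical are collapsed into a single combined description.
    merged = []
    for area, text in forecasts:
        if merged and merged[-1][1] == text:
            merged[-1] = (merged[-1][0] + " and " + area, text)
        else:
            merged.append((area, text))
    return ["{}: {}".format(a, t) for a, t in merged]

bulletin = merge_descriptions(forecasts)
# → ["Bohai Sea and Yellow Sea: NE wind force 6-7",
#    "East China Sea: SE wind force 5"]
```

Rules of this kind, learned or refined from the historical bulletins, are what keep the generated text from repeating identical forecasts area by area.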
Finally, a custom template and Aspose technology are used to automatically generate the bulletins. Through file conversion, data mining, data filtering and noise removal on historical bulletins, a document template is established in which constant and variable fields are separated and general formats are customized. The Aspose tool then loads the template, fills its variable fields with actual forecast information, and exports the result as a finished document.
Results show that the automatically generated text has precise spatial descriptions, merges areas accurately with no scales missed, and reads smoothly; it is semantically and grammatically correct and conforms to forecasters' writing habits. The automatically generated bulletin effectively avoids common mistakes of manual editing and removes much tedious manual labor. The system has been put into operation at the China Central Meteorological Observatory, greatly improving the efficiency of marine weather services.
How to cite: Bai, X., Lv, Z., and Wang, H.: Application of GIS technology and natural language processing technology in automatic generation of marine weather bulletin, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6189, https://doi.org/10.5194/egusphere-egu2020-6189, 2020.
EGU2020-15048 | Displays | ITS4.1/NP4.2
Polytope: Serving ECMWFs Big Weather Data – James Hawkes, Nicolau Manubens, Emanuele Danovaro, John Hanley, Stephan Siemen, Baudouin Raoult, and Tiago Quintino
How to cite: Hawkes, J., Manubens, N., Danovaro, E., Hanley, J., Siemen, S., Raoult, B., and Quintino, T.: Polytope: Serving ECMWFs Big Weather Data, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-15048, https://doi.org/10.5194/egusphere-egu2020-15048, 2020.
EGU2020-11553 | Displays | ITS4.1/NP4.2
Objective classification of changes in water regime types of the Russian Plain rivers utilizing machine learning approaches – Alexander Ivanov, Timophey Samsonov, Natalia Frolova, Maria Kireeva, and Elena Povalishnikova
Hydrological regime classification of the rivers of the Russian Plain has always been done by hand, using subjective analysis of various characteristics of seasonal runoff. The classification was last updated in the early 1990s.
In this work we attempt to use different machine learning methods for objective classification. Both clustering (DBSCAN, K-Means) and classification (XGBoost) methods were used to establish 1) whether the established runoff types can be inferred from the data using a supervised approach and 2) whether similar clusters can be inferred from the data (unsupervised approach). Monthly runoff data for 237 rivers of the Russian Plain from 1945 to 2016 were used as the dataset.
In a first experiment, the dataset was divided into the periods 1945-1977 and 1978-2016 in order to detect changes in river water regimes due to climate change. The monthly data were transformed into the following features: annual and seasonal runoff, runoff levels for different seasons, minimum and maximum monthly runoff, ratios of minimum and maximum runoff to the yearly average, and others. Supervised classification using XGBoost achieved 90% accuracy in water regime type identification for the 1945-1977 period. This classifier identified shifts in water regime types for southern rivers of the Russian Plain in the Don region.
The DBSCAN clustering algorithm identified six major clusters corresponding to existing water regime types: the Kola peninsula; the north-east of the Russian Plain and the polar Urals; Central Russia; Southern Russia; the arid south-east; and the foothills and, separately, the higher altitudes of the Caucasus. Nonetheless, a better approach was sought because the clusters intersect owing to the continuous nature of the data. A cosine similarity metric was used as an alternative way to separate river runoff types, this time for each year. The yearly cutoff also allows us to build a timeline of water regime changes over the course of 70 years. Using it as an objective ground truth, we plan to redo the earlier classification and clustering and establish an automated way to classify changes in water regime over time.
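The cosine-similarity assignment of a single year's runoff profile to a regime type can be sketched as follows, with invented monthly profiles standing in for the real reference regimes.

```python
import numpy as np

# Invented 12-month runoff profiles for two reference regime types:
# a spring-flood (snowmelt) regime and a rain-fed regime.
snowmelt = np.array([5, 5, 8, 40, 90, 30, 15, 10, 8, 7, 6, 5], dtype=float)
rainfed = np.array([30, 35, 40, 30, 20, 10, 5, 5, 10, 20, 30, 35], dtype=float)

def cosine(a, b):
    # Cosine similarity compares the *shape* of the seasonal profile,
    # ignoring the absolute magnitude of the runoff.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# One observed year: monthly runoff with a clear May flood peak.
year = np.array([4, 6, 9, 50, 85, 25, 12, 9, 8, 6, 5, 4], dtype=float)
sims = {"snowmelt": cosine(year, snowmelt), "rainfed": cosine(year, rainfed)}
regime = max(sims, key=sims.get)  # → "snowmelt"
```

Scoring every river-year this way yields the per-year regime timeline mentioned above, from which regime shifts over the 70-year record can be read off.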
As a result, the following conclusions can be made
The study was supported by the Russian Science Foundation (grant No. 19-77-10032) for the methods and by the Russian Foundation for Basic Research (grant No. 18-05-60021) for the analyses in the Arctic region.
How to cite: Ivanov, A., Samsonov, T., Frolova, N., Kireeva, M., and Povalishnikova, E.: Objective classification of changes in water regime types of the Russian Plain rivers utilizing machine learning approaches, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11553, https://doi.org/10.5194/egusphere-egu2020-11553, 2020.
EGU2020-21911 | Displays | ITS4.1/NP4.2
Towards opensource LOD2 modelling of urban spaces using an optimised machine learning and rules-based approach.
Tom Rowan and Adrian Butler
In order to enable community groups and other interested parties to evaluate the effects of flood management, water conservation and other hydrological issues, better localised mapping is required. Although some maps are publicly available, many are behind paywalls, especially those with three-dimensional features. In this study, London is used as a test case to evaluate machine learning and rules-based approaches with open-source maps and LiDAR data to create more accurate representations (LOD2) of small-scale areas. Machine learning is particularly well suited to the recognition of local repetitive features such as building roofs and trees, while roads are best identified and mapped using a faster rules-based approach.
In order to create a useful LOD2 representation, a user interface, processing-rules manipulation and an assumption editor have all been incorporated. Features such as randomly assigning sub-terrain features (basements) using Monte Carlo methods, and artificial sewerage representation, enable the user to grow these models from open-source data into useful model inputs. This project is aimed at local-scale hydrological modelling, rainfall-runoff analysis and other local planning applications.
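The Monte Carlo basement assignment can be illustrated with a minimal sketch; the basement probability and the plain Bernoulli draw are assumptions, since the abstract does not specify the sampling scheme (a real model would condition the probability on building age, type and local survey data).

```python
import random

def assign_basements(building_ids, p_basement=0.3, seed=42):
    """Randomly flag each building as having a basement via a
    Bernoulli draw with an assumed prior probability."""
    rng = random.Random(seed)  # fixed seed keeps the draw reproducible
    return {b: rng.random() < p_basement for b in building_ids}

flags = assign_basements(range(1000))
share_with_basement = sum(flags.values()) / len(flags)
```

Fixing the seed makes individual model realisations reproducible while still sampling the uncertain sub-surface configuration.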
The goal is to provide turn-key data processing for small-scale modelling, which should help advance the installation of SuDS and other water management solutions, as well as having broader uses. The method is designed to enable fast and accurate representations of small-scale features (1 hectare to 1 km²), with larger-scale applications planned for future work. This work forms part of the CAMELLIA project (Community Water Management for a Liveable London) and aims to provide useful tools for local-scale modellers and possibly larger-scale industry/scientific users.
How to cite: Rowan, T. and Butler, A.: Towards opensource LOD2 modelling of urban spaces using an optimised machine learning and rules-based approach., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21911, https://doi.org/10.5194/egusphere-egu2020-21911, 2020.
EGU2020-3885 | Displays | ITS4.1/NP4.2
Numerical Weather Forecast Post-processing with Ensemble Learning and Transfer Learning
Yuwen Chen and Xiaomeng Huang
Statistical approaches have been used for decades to augment and interpret numerical weather forecasts. The emergence of artificial intelligence algorithms has provided new perspectives in this field, but the extension of algorithms developed for station networks with rich historical records to newly-built stations remains a challenge. To address this, we design a framework that combines two machine learning methods: temperature prediction based on an ensemble of multiple machine learning models, and transfer learning for newly-built stations. We then evaluate this framework by post-processing temperature forecasts provided by a leading weather forecast center and observations from 301 weather stations in China. Station clustering reduces forecast errors by 24.4% on average, while transfer learning improves predictions by 13.4% for recently-built sites with only one year of data available. This work demonstrates how ensemble learning and transfer learning can be used to supplement weather forecasting.
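A minimal sketch of the transfer-learning idea for a newly-built station, assuming a simple linear bias-correction model and synthetic data (the study's actual ensemble models are more elaborate): pretrain on pooled stations with long records, then re-fit only the station-specific bias on the new station's short record.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setting: observations relate to the raw forecast by a shared
# slope but a station-dependent bias.
fc_pool = rng.uniform(-10, 30, 5000)                       # rich stations
obs_pool = 0.9 * fc_pool + 1.0 + rng.normal(0, 0.5, 5000)

fc_new = rng.uniform(-10, 30, 60)                          # new station, little data
obs_new = 0.9 * fc_new + 2.0 + rng.normal(0, 0.5, 60)

def fit_linear(x, y):
    """Least-squares fit of obs = slope * forecast + intercept."""
    A = np.column_stack([x, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef  # (slope, intercept)

slope, _ = fit_linear(fc_pool, obs_pool)

# "Transfer": reuse the pooled slope, re-estimate only the local bias.
intercept_new = np.mean(obs_new - slope * fc_new)

err_raw = np.mean(np.abs(fc_new - obs_new))                       # raw NWP
err_transfer = np.mean(np.abs(slope * fc_new + intercept_new - obs_new))
```

Re-fitting only the low-dimensional part of the model is what makes one year of local data sufficient.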
How to cite: Chen, Y. and Huang, X.: Numerical Weather Forecast Post-processing with Ensemble Learning and Transfer Learning, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3885, https://doi.org/10.5194/egusphere-egu2020-3885, 2020.
EGU2020-7275 | Displays | ITS4.1/NP4.2
Image Regression Classification of Air Quality by Convolutional Neural Network
Pu-Yun Kow, Li-Chiu Chang, and Fi-John Chang
As living standards have improved, people have become increasingly concerned about air pollution problems. Taiwan also faces this problem, especially in the southern region. Thus, it is a crucial task to rapidly provide reliable information on air quality. This study intends to classify air quality images into, for example, “high pollution”, “moderate pollution”, or “low pollution” categories in areas of interest. In this work, we aim at a finer classification of air quality, i.e., five to six categories. To achieve our goal, we propose a hybrid model (CNN-FC) that integrates a convolutional neural network (CNN) and a fully-connected neural network for classifying the concentrations of PM2.5 and PM10 as well as the air quality index (AQI). Despite being implemented in many fields, regression classification has rarely been applied to air pollution problems. Image regression classification is useful for air pollution research, especially when some of the (more sophisticated) air quality detectors are malfunctioning. The hourly air quality datasets collected at Station Linyuan of Kaohsiung City in southern Taiwan form the case study for evaluating the applicability and reliability of the proposed CNN-FC approach. A total of 3549 datasets that contain the images (photos) and monitored data of PM2.5, PM10, and AQI are used to train and validate the constructed model. The proposed CNN-FC approach performs image regression classification by extracting important characteristics from images. The results demonstrate that the proposed CNN-FC model provides a practical and reliable approach to accurate image regression classification. The main breakthrough of this study is the image classification of several pollutants using only a single shallow CNN-FC model.
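Turning a regressed concentration into a pollution category amounts to a binning step, which can be sketched as follows; the thresholds below are illustrative and are not the official Taiwan AQI breakpoints.

```python
def pm25_category(pm25, breakpoints=None):
    """Map a PM2.5 concentration (µg/m³) to a pollution category.

    The (upper bound, label) pairs are assumed for illustration; the
    study's finer 5-6 class scheme would slot in the same way.
    """
    if breakpoints is None:
        breakpoints = [(15, "low"),
                       (35, "moderate"),
                       (54, "unhealthy-sensitive"),
                       (150, "high"),
                       (float("inf"), "very high")]
    for upper, label in breakpoints:
        if pm25 <= upper:
            return label

labels = [pm25_category(v) for v in (8.0, 40.0, 200.0)]
```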
Keywords: PM2.5 forecast; image classification; deep learning; convolutional neural network; fully-connected neural network; Taiwan
How to cite: Kow, P.-Y., Chang, L.-C., and Chang, F.-J.: Image Regression Classification of Air Quality by Convolutional Neural Network, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7275, https://doi.org/10.5194/egusphere-egu2020-7275, 2020.
EGU2020-5429 | Displays | ITS4.1/NP4.2
Forecasting Model of Short-term PM2.5 Concentration based on Deep Learning
Wei Tang, Wen-fang Zhao, Runsheng Lin, and Yong Zhou
In order to improve the accuracy of the PM2.5 concentration forecast at the Beijing Meteorological Bureau, a deep learning prediction model based on a convolutional neural network (CNN) and a long short-term memory neural network (LSTM) is proposed. Firstly, feature vectors were extracted via correlation analysis from meteorological data such as temperature, wind, relative humidity, precipitation, visibility and atmospheric pressure. Secondly, since PM2.5 concentration is significantly affected by the surrounding meteorological conditions, meteorological grid analysis data were newly incorporated into the model, alongside historical PM2.5 concentration data and meteorological observations from the station itself. Spatio-temporal sequence data were generated from these data after integration. High-level spatio-temporal features were extracted through the combination of the CNN and LSTM. Finally, the model produces a 24-hour prediction of PM2.5 concentration. The accuracy of this optimized model is compared with a support vector machine (SVM) and the existing PM2.5 forecast system. The results show that the proposed CNN-LSTM model performs better than the SVM and the current operational models at the Beijing Meteorological Bureau, effectively improving the prediction accuracy of PM2.5 concentration at different prediction time scales within the next 24 hours.
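Generating spatio-temporal sequence data for a CNN-LSTM typically means slicing the grid time series into overlapping windows; a minimal sketch, where the grid size, window length and variable count are assumptions for illustration:

```python
import numpy as np

def make_sequences(grid_series, window=24):
    """Slice a (time, H, W, channels) grid series into overlapping
    windows shaped (samples, window, H, W, channels) - the usual
    input layout for a CNN-LSTM."""
    n = grid_series.shape[0] - window
    return np.stack([grid_series[i:i + window] for i in range(n)])

# Hypothetical: 100 hourly analysis grids, 8x8 cells, 5 met. variables.
series = np.zeros((100, 8, 8, 5))
X = make_sequences(series, window=24)
```

The CNN then convolves over the (H, W, channels) axes of each time step, and the LSTM runs along the window axis.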
How to cite: Tang, W., Zhao, W., Lin, R., and Zhou, Y.: Forecasting Model of Short-term PM2.5 Concentration based on Deep Learning, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5429, https://doi.org/10.5194/egusphere-egu2020-5429, 2020.
EGU2020-4659 | Displays | ITS4.1/NP4.2
Exploring the Applicability of Deep Learning Methods in Mid-infrared Spectroscopy for Soil Property Predictions
Franck Albinet, Amelia Lee Zhi Yi, Petra Schmitter, Romina Torres Astorga, and Gerd Dercon
The usage of mathematical models and mid-infrared (MIR) spectral databases to predict the elemental composition of soil allows for rapid and high-throughput characterization of soil properties. Partial Least Squares Regression (PLSR) is a pervasive statistical method used for such predictive models due to a large existing knowledge base paired with standardized best practices in model application. Despite its ability to transform data in the high-dimensional space (high spectral resolution) to a space of fewer dimensions that captures the correlation between the input space (spectra) and the response variables (elemental soil composition), this popular approach fails to capture non-linear patterns. Further, PLSR has poor prediction capacities for a wide range of soil analytes, such as potassium and phosphorus, to mention just a few. In addition, prediction is highly sensitive to pre-processing steps in data derivation, which can also be tainted by human biases arising from the empirical selection of wavenumber regions. Thus, the usage of PLSR as a methodology for elemental prediction of soil remains time-consuming and limited in scope.
With major breakthroughs in the area of Deep Learning (DL) in the past decade, soil science researchers are increasingly shifting their focus from traditional techniques such as PLSR to DL models such as Convolutional Neural Networks. Promising results of this shift have been showcased, including increased prediction accuracy, reduced needs for data pre-processing, and improved evaluation of explanatory factors. Increasingly, studies are also looking to expand beyond the regional scope and support higher-resolution and more accurate databases for global modelling efforts. However, the setup of a DL model is notoriously data intensive and often said to be less applicable when limited data are available. While a MIR spectra database has recently been publicly released by the Kellogg Soil Survey Laboratory, United States Department of Agriculture, such large-scale initiatives remain niche and focus only on specific regions and/or ecosystem types.
This research is a first effort in applying DL techniques in a relatively data-scarce environment (approximately 1000 labelled spectra) using transfer learning and domain-specific data augmentation techniques. In particular, we assess the potential of unsupervised feature learning approaches as a key enabler for broader applicability of DL techniques in the context of MIR spectroscopy and soil sciences. A better understanding of the potential of DL methods in soil composition prediction will greatly advance the work of soil sciences and natural resource management. Improvements to overcome its associated challenges will be a step forward in creating a universal soil modelling technique through reusable models and contribute to a large worldwide soil MIR spectral database.
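Domain-specific augmentation for spectra often amounts to small perturbations that preserve the chemical signal; a minimal sketch, with the noise level and shift range as assumed parameters (the study's actual augmentations are not specified in the abstract):

```python
import numpy as np

def augment_spectrum(spectrum, rng, noise_sd=0.005, max_shift=2):
    """Augment a MIR spectrum with small additive noise plus a random
    shift of a few wavenumber bins, simulating instrument variation.
    Suitable magnitudes depend on the spectrometer."""
    shift = rng.integers(-max_shift, max_shift + 1)
    shifted = np.roll(spectrum, shift)
    return shifted + rng.normal(0, noise_sd, spectrum.shape)

rng = np.random.default_rng(1)
spec = np.linspace(0, 1, 1800)   # stand-in for ~1800 wavenumber bins
augmented = [augment_spectrum(spec, rng) for _ in range(10)]
```

Each labelled spectrum can thus be multiplied into many slightly perturbed copies, easing the ~1000-sample data constraint.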
How to cite: Albinet, F., Lee Zhi Yi, A., Schmitter, P., Torres Astorga, R., and Dercon, G.: Exploring the Applicability of Deep Learning Methods in Mid-infrared Spectroscopy for Soil Property Predictions, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4659, https://doi.org/10.5194/egusphere-egu2020-4659, 2020.
EGU2020-7819 | Displays | ITS4.1/NP4.2
Detecting Synoptic Patterns related to Freezing Rain in Montréal using Deep Learning
Magdalena Mittermeier, Émilie Bresson, Dominique Paquin, and Ralf Ludwig
Climate change is altering the Earth’s atmospheric circulation and the dynamic drivers of extreme events. Extreme weather events pose a great potential risk to infrastructure and human security. In Southern Québec, freezing rain is among the rare, yet high-impact events that remain particularly difficult to detect, describe or even predict.
Large climate model ensembles are instrumental for a profound analysis of extreme events, as they can be used to provide a sufficient number of model years. Due to the physical nature and the high spatiotemporal resolution of regional climate models (RCMs), large ensembles can not only be employed to investigate the intensity and frequency of extreme events, but also allow analysis of the synoptic drivers of freezing rain events and exploration of the respective dynamic alterations under climate change conditions. However, several challenges remain for the analysis of large RCM ensembles, mainly the high computational costs and the resulting data volume, which require novel statistical methods for efficient screening and analysis, such as deep neural networks (DNNs). Further, to date, only the Canadian Regional Climate Model version 5 (CRCM5) simulates freezing rain in-line using a diagnostic method. For the analysis of freezing rain in other RCMs, computationally intensive, off-line diagnostic schemes have to be applied to archived data. Another approach to freezing rain analysis focuses on the relation between synoptic drivers at 500 hPa and at sea level pressure and the occurrence of freezing rain in the study area of Montréal.
Here, we explore the capability of training a deep neural network on the detection of the synoptic patterns associated with the occurrence of freezing rain in Montréal. This climate pattern detection task is a visual image classification problem that is addressed with supervised machine learning. Labels for the training set are derived from CRCM5 in-line simulations of freezing rain. This study aims to provide a trained network, which can be applied to large multi-model ensembles over the North American domain of the Coordinated Regional Climate Downscaling Experiment (CORDEX) in order to efficiently filter the climate datasets for the current and future large-scale drivers of freezing rain.
We present the setup of the deep learning approach, including the network architecture, the training set statistics, and the optimization and regularization methods. Additionally, we present the classification results of the deep neural network in the form of a single-number evaluation metric as well as confusion matrices. Furthermore, we show an analysis of our training set regarding false positives and false negatives.
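The evaluation outputs mentioned above (a confusion matrix plus a single-number metric) can be computed in a few lines; the class labels and example predictions below are hypothetical.

```python
from collections import Counter

def confusion_matrix(y_true, y_pred, labels):
    """Rows are true classes, columns are predicted classes."""
    counts = Counter(zip(y_true, y_pred))
    return [[counts[(t, p)] for p in labels] for t in labels]

def accuracy(y_true, y_pred):
    """A single-number metric; rare-event studies often prefer
    class-balanced scores such as macro-F1 instead."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = ["freezing", "none", "none", "freezing", "none"]
y_pred = ["freezing", "none", "freezing", "none", "none"]
cm = confusion_matrix(y_true, y_pred, ["freezing", "none"])
acc = accuracy(y_true, y_pred)
```

The off-diagonal cells of the matrix are exactly the false positives and false negatives analysed in the training set.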
How to cite: Mittermeier, M., Bresson, É., Paquin, D., and Ludwig, R.: Detecting Synoptic Patterns related to Freezing Rain in Montréal using Deep Learning, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7819, https://doi.org/10.5194/egusphere-egu2020-7819, 2020.
EGU2020-3984 | Displays | ITS4.1/NP4.2
Artificial intelligence for discrimination of sediment facies based on high-resolution elemental and colour data from coastal sediments of the East Frisian Wadden Sea, Germany
An-Sheng Lee, Dirk Enters, Sofia Ya Hsuan Liou, and Bernd Zolitschka
Sediment facies provide vital information for the reconstruction of past environmental variability. Due to rising interest in paleoclimate data, sediment surveys are continually growing in importance, as is the amount of sediment to be discriminated into different facies. The conventional approach is to macroscopically determine sediment structure and colour and combine them with physical and chemical information - a time-consuming task that relies heavily on the experience of the scientist in charge. Today, rapidly generated, high-resolution multiproxy sediment parameters are readily available from down-core scanning techniques and provide qualitative or even quantitative physical and chemical sediment properties. In 2016, the interdisciplinary research project WASA (Wadden Sea Archive) was launched to investigate palaeo-landscapes and environments of the Wadden Sea. The project has recovered 92 sediment cores, up to 5 m long, from the tidal flats, channels and offshore areas around the island of Norderney (East Frisian Wadden Sea, Germany). Their facies were described using the conventional approach as glaciofluvial sands, moraine, peat, tidal deposits, shoreface sediments, etc. In this study, these sediments were scanned by a micro X-ray fluorescence (µ-XRF) core scanner to obtain high-resolution records of multi-elemental data (2000 µm) and optical images (47 µm). Here we propose a supervised machine-learning application for the discrimination of sediment facies using these scanning data. Thus, the time invested and the potential bias common to the conventional approach can be reduced considerably. We expect that our approach will contribute to developing a more comprehensive and time-efficient automatic sediment facies discrimination.
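A supervised facies classifier on the scanned elemental data could, in its simplest form, be a nearest-centroid rule; the element counts and facies labels below are invented for illustration, and the study's actual model is not specified in the abstract.

```python
import numpy as np

def nearest_centroid(X_train, y_train, x):
    """Assign a scan point to the facies whose mean elemental
    signature (centroid) is closest in feature space - a minimal
    stand-in for a trained supervised classifier."""
    labels = sorted(set(y_train))
    mask = np.array(y_train)
    centroids = {c: X_train[mask == c].mean(axis=0) for c in labels}
    return min(labels, key=lambda c: np.linalg.norm(x - centroids[c]))

# Hypothetical µ-XRF counts for (Ca, Fe, Si) at labelled core depths.
X = np.array([[900., 50., 100.], [880., 60., 120.],   # tidal deposits
              [100., 40., 800.], [120., 55., 780.]])  # glaciofluvial sands
y = ["tidal", "tidal", "sand", "sand"]

pred = nearest_centroid(X, y, np.array([110., 50., 790.]))
```

Applied down-core at the 2000 µm scan resolution, such a rule yields a continuous, reproducible facies log.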
Keywords: the Wadden Sea, µ-XRF core scanning, machine-learning, sediment facies discrimination
How to cite: Lee, A.-S., Enters, D., Liou, S. Y. H., and Zolitschka, B.: Artificial intelligence for discrimination of sediment facies based on high-resolution elemental and colour data from coastal sediments of the East Frisian Wadden Sea, Germany, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3984, https://doi.org/10.5194/egusphere-egu2020-3984, 2020.
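The supervised facies-discrimination idea can be illustrated with a deliberately minimal sketch: a nearest-centroid classifier on synthetic two-class elemental data. The element means, class labels and all numbers below are hypothetical stand-ins, not values from the WASA cores, and the abstract does not specify which classifier the authors actually used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: rows are per-depth scanner measurements,
# columns are element intensities (e.g. Ca, Fe, Ti); labels are facies.
X_train = np.vstack([
    rng.normal(loc=[10, 2, 1], scale=0.5, size=(50, 3)),  # class 0: "tidal deposit"
    rng.normal(loc=[2, 8, 4], scale=0.5, size=(50, 3)),   # class 1: "peat"
])
y_train = np.array([0] * 50 + [1] * 50)

def fit_centroids(X, y):
    """One centroid per facies class in element space."""
    return np.array([X[y == c].mean(axis=0) for c in np.unique(y)])

def predict(centroids, X):
    """Assign each measurement to the nearest class centroid."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

centroids = fit_centroids(X_train, y_train)
labels = predict(centroids, X_train)
```

Any supervised classifier (random forests, gradient boosting, neural networks) could replace the centroid rule; the point is only that µ-XRF element intensities form the feature vectors and conventionally described facies provide the training labels.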
EGU2020-9870 | Displays | ITS4.1/NP4.2
Detecting Tropical Cyclones using Deep Learning Techniques
Daniel Galea, Bryan Lawrence, and Julian Kunkel
Finding and identifying important phenomena in large volumes of simulation data consumes time and resources. Deep Learning offers a route to improved speed and cost. In this work we demonstrate the application of Deep Learning to identifying data that contain various classes of tropical cyclone. Our initial application is to re-analysis data, but the eventual goal is to use this system during numerical simulation to identify data of interest before writing it out.
A Deep Learning model has been developed to help identify data containing tropical cyclones of varying intensity. The model uses convolutional layers to build up the patterns to look for and a fully-connected classifier to predict whether a tropical cyclone is present in the input. Techniques such as batch normalization and dropout were also tested. The model was trained on a subset of the ERA-Interim dataset from 1 January 1979 to 31 July 2017, with the corresponding labels obtained from the IBTrACS dataset. The model obtained an accuracy of 99.08% on a test set comprising 20% of the original dataset.
An advantage of this model is that it does not rely on thresholds set a priori, such as a sea-level pressure minimum, a vorticity maximum, or a measure of the depth and strength of deep convection, making it more objective than previous detection methods. Moreover, given that current methods follow non-trivial algorithms, the Deep Learning model is expected to produce the required prediction much more quickly, making it viable to embed in an existing numerical simulation.
Most current methods also apply different thresholds for different basins (planetary regions). In principle, the globally trained model should avoid the need for such differences; however, it was found that while differing thresholds were not required, training data from specific regions was needed to reach similar accuracy when individual basins were examined.
The existing version, with greater than 99% accuracy globally and around 91% when trained only on cases from the Western Pacific and Western Atlantic basins, has been trained on ERA-Interim data. The next steps with this work will involve assessing the suitability of the pre-trained model for different data, and deploying it within a running numerical simulation.
How to cite: Galea, D., Lawrence, B., and Kunkel, J.: Detecting Tropical Cyclones using Deep Learning Techniques, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9870, https://doi.org/10.5194/egusphere-egu2020-9870, 2020.
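The architecture described (convolutional feature extraction followed by a fully-connected classifier) can be caricatured in a few lines of NumPy. This is a hedged sketch with random, untrained weights, not the model from the abstract; the field size, kernel and parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv2d_valid(img, kernel):
    """Naive valid-mode 2-D convolution (illustration only)."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def detect(field, kernel, w, b):
    """Conv layer -> ReLU -> global average pool -> logistic output:
    a probability that the input field contains a cyclone."""
    feat = np.maximum(conv2d_valid(field, kernel), 0.0)
    pooled = feat.mean()
    return sigmoid(w * pooled + b)

field = rng.normal(size=(32, 32))   # stand-in for one reanalysis field
kernel = rng.normal(size=(3, 3))
p = detect(field, kernel, w=0.5, b=-0.1)
```

In the trained model the kernel and classifier weights are learned from labelled ERA-Interim fields, so the output probability becomes a threshold-free detection score.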
EGU2020-7276 | Displays | ITS4.1/NP4.2
Regional flood forecasting based on the spatio-temporal variation characteristics using hybrid SOM and dynamic neural networks
Tai-Chen Chen, Li-Chiu Chang, and Fi-John Chang
The frequency of extreme hydrological events caused by climate change has increased in recent years. Moreover, most urban areas in various countries are located on low-lying, flood-prone alluvial plains, so the severity of flooding disasters and the number of affected people increase significantly. Therefore, it is imperative to explore the spatio-temporal variation characteristics of regional floods and apply them to real-time flood forecasting. Flash floods are common and difficult to control in Taiwan due to several geo-hydro-meteorological factors including drastic changes in topography, steep rivers, short concentration times, and heavy rain. In recent decades, artificial intelligence (AI) and machine learning techniques have proven effective in tackling real-time climate-related disasters. This study combines an unsupervised, competitive neural network, the self-organizing map (SOM), with dynamic neural networks to make regional flood inundation forecasts. The SOM can be used to cluster high-dimensional historical flooding events and map the events onto a two-dimensional topological feature map. The topological structure displayed in the output space is helpful for exploring the spatio-temporal variation characteristics of different flood events in the investigated watershed. Dynamic neural networks are suitable for forecasting time-varying systems because their feedback mechanism keeps track of the most recent tendency. The results demonstrate that the real-time regional flood inundation forecast model combining the SOM and dynamic neural networks extracts the characteristics of regional flood inundation more quickly and produces multi-step-ahead flood inundation forecasts more accurately than traditional methods. The proposed methodology can provide spatio-temporal information on flood inundation to decision makers and residents for taking precautionary measures against flooding.
Keywords: Artificial neural network (ANN); Self-organizing map (SOM); Dynamic neural networks; Regional flood; Spatio-temporal distribution
How to cite: Chen, T.-C., Chang, L.-C., and Chang, F.-J.: Regional flood forecasting based on the spatio-temporal variation characteristics using hybrid SOM and dynamic neural networks, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7276, https://doi.org/10.5194/egusphere-egu2020-7276, 2020.
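The unsupervised component of the hybrid model, a SOM mapping high-dimensional events onto a 2-D grid, can be sketched minimally as follows. The grid size, learning-rate schedule and random "flood-event" feature vectors are illustrative assumptions, not the study's configuration.

```python
import numpy as np

rng = np.random.default_rng(2)

def train_som(X, grid=(4, 4), epochs=20, lr0=0.5, sigma0=1.5):
    """Minimal SOM: map n-D samples onto a 2-D topological grid."""
    n_nodes = grid[0] * grid[1]
    W = rng.normal(size=(n_nodes, X.shape[1]))           # node weight vectors
    coords = np.array([(i, j) for i in range(grid[0])
                       for j in range(grid[1])], float)  # node grid positions
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                      # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 1e-3         # shrinking neighbourhood
        for x in X:
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))  # best-matching unit
            dist2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-dist2 / (2 * sigma ** 2))        # neighbourhood function
            W += lr * h[:, None] * (x - W)               # pull nodes toward sample
    return W

X = rng.normal(size=(60, 5))   # stand-in for flood-event feature vectors
W = train_som(X)
bmus = np.array([np.argmin(((W - x) ** 2).sum(axis=1)) for x in X])
```

After training, each historical event's best-matching unit places it on the 2-D map, so neighbouring nodes collect events with similar spatio-temporal inundation characteristics.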
EGU2020-10297 | Displays | ITS4.1/NP4.2
Building Cloud-Based Data Services to Enable Earth Science Workflows
John Hanley, Stephan Siemen, James Hawkes, Milana Vuckovic, Tiago Quintino, and Florian Pappenberger
Weather forecasts produced by ECMWF and environmental services provided by the Copernicus programme act as vital input for many downstream simulations and applications. A variety of products, such as ECMWF reanalyses and archived forecasts, are additionally available to users via the MARS archive and the Copernicus data portal. Transferring, storing and locally modifying large volumes of such data prior to integration currently presents a significant challenge to users. The key aim for ECMWF within the H2020 HiDALGO project (https://hidalgo-project.eu/) is to migrate these tasks to the cloud, thereby facilitating fast and seamless application integration through precise and efficient data delivery to the end-user. The required cloud infrastructure development will also feed into ECMWF's contribution to the European Weather Cloud pilot, a collaborative cloud development project between ECMWF and EUMETSAT.
The HiDALGO project aims to implement a set of services and functionality to enable the simulation of complex global challenges, which requires massive high-performance computing resources alongside state-of-the-art data analytics and visualization. The HiDALGO use-case workflows comprise four main components: pre-processing, numerical simulation, post-processing and visualization. The core simulations are ideally suited to running in a dedicated HPC environment, while the pre-/post-processing and visualisation tasks are generally well suited to a cloud environment. Efficiently managing and orchestrating the integration of both HPC and cloud environments to improve overall performance and functionality is the key goal of HiDALGO.
ECMWF's role in the project will be to enable seamless integration of two pilot applications with its meteorological data and services (such as data exploration, analysis and visualisation), delivered via ECMWF's cloud and orchestrated by bespoke HiDALGO workflows. The demonstrated workflows show the increased value that can be created from weather forecasts, as well as from the derived air-quality forecasts provided by the Copernicus Atmosphere Monitoring Service (CAMS).
This poster will give a general overview of the HiDALGO project and its main aims and objectives. It will present the two test pilot applications which will be used for integration, and an overview of the general workflows and services within HiDALGO. In particular, it will focus on how ECMWF's cloud data and services will couple with the test pilot applications, thereby improving overall workflow performance and enabling access to new data and products for the pilot users.
How to cite: Hanley, J., Siemen, S., Hawkes, J., Vuckovic, M., Quintino, T., and Pappenberger, F.: Building Cloud-Based Data Services to Enable Earth Science Workflows, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10297, https://doi.org/10.5194/egusphere-egu2020-10297, 2020.
EGU2020-8423 | Displays | ITS4.1/NP4.2
The selection of adaptive region of geomagnetic map based on PCA and GA-BP neural network
Yang Chong, Dongqing Zhao, Guorui Xiao, Minzhi Xiang, Linyang Li, and Zuoping Gong
The selection of an adaptive region of the geomagnetic map is an important factor affecting the positioning accuracy of geomagnetic navigation. An automatic recognition and classification method for adaptive regions of the geomagnetic background field, based on Principal Component Analysis (PCA) and a GA-BP neural network, is proposed. Firstly, PCA is used to analyze the geomagnetic characteristic parameters, and independent characteristic parameters containing the principal components are selected. Then, the GA-BP neural network model is constructed and the correspondence between geomagnetic characteristic parameters and matching performance is established, so as to realize the recognition and classification of adaptive regions. Finally, simulation results show that the method is feasible and efficient and that the positioning accuracy of geomagnetic navigation is improved.
How to cite: Chong, Y., Zhao, D., Xiao, G., Xiang, M., Li, L., and Gong, Z.: The selection of adaptive region of geomagnetic map based on PCA and GA-BP neural network, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8423, https://doi.org/10.5194/egusphere-egu2020-8423, 2020.
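The PCA step, extracting independent characteristic parameters that carry the principal components, can be sketched via the SVD of the centred parameter matrix. The synthetic "geomagnetic characteristic parameters" below are invented, correlated columns used only to show the mechanics.

```python
import numpy as np

rng = np.random.default_rng(3)

def pca(X, n_components):
    """PCA by SVD of the centred data matrix: returns component scores
    and the fraction of variance explained by each component."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s ** 2 / (len(X) - 1)          # per-component variance
    ratio = var / var.sum()              # explained-variance fraction
    return Xc @ Vt[:n_components].T, ratio[:n_components]

# Hypothetical characteristic parameters: 2 independent drivers plus
# 4 noisy linear mixtures of them, so the matrix is nearly rank-2.
base = rng.normal(size=(200, 2))
X = np.hstack([base,
               base @ rng.normal(size=(2, 4)) + 0.05 * rng.normal(size=(200, 4))])
scores, ratio = pca(X, 3)
```

Components are returned in decreasing order of explained variance, so keeping the leading ones yields a small set of decorrelated inputs for the downstream GA-BP classifier.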
EGU2020-13191 | Displays | ITS4.1/NP4.2
Predicting forest fire in Indonesia using remote sensing data
Suwei Yang, Kuldeep S Meel, and Massimo Lupascu
Over the last decades, forest fires have increased due to deforestation and climate change. In Southeast Asia, tropical peatland forest fires are a major environmental issue with a significant effect on the climate, causing extensive social, health and economic impacts. As a result, forest fire prediction has emerged as a key challenge in computational sustainability. Existing forest fire prediction systems, such as the Canadian Forest Fire Danger Rating System (Natural Resources Canada), are based on handcrafted features and use data from instruments on the ground. However, such data may not always be available. In this work, we propose a novel machine learning approach that uses historical satellite images to predict forest fires in Indonesia. Our prediction model achieves more than 0.86 area under the receiver operating characteristic (ROC) curve. Further evaluations show that the model's prediction performance remains above 0.81 area under the ROC curve even with reduced data. The results support our claim that machine learning based approaches can lead to reliable and cost-effective forest fire prediction systems.
How to cite: Yang, S., Meel, K. S., and Lupascu, M.: Predicting forest fire in Indonesia using remote sensing data, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13191, https://doi.org/10.5194/egusphere-egu2020-13191, 2020.
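The reported skill metric, area under the ROC curve, can be computed directly from labels and scores via the Mann-Whitney statistic: the probability that a randomly chosen positive outscores a randomly chosen negative. The four-sample toy data below are illustrative, not the study's predictions.

```python
import numpy as np

def roc_auc(y_true, scores):
    """AUC as the Mann-Whitney U statistic: fraction of (positive,
    negative) pairs ranked correctly, with ties counting half."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, float)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Toy example: 2 fire pixels, 2 non-fire pixels, model scores in [0, 1].
auc = roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])  # -> 0.75
```

An AUC above 0.86, as reported, means a randomly chosen fire location receives a higher score than a randomly chosen non-fire location more than 86% of the time.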
EGU2020-13419 | Displays | ITS4.1/NP4.2
Machine learning as a tool for avalanche forecasting
Martin Hendrick, Cristina Pérez-Guillén, Alec van Herwijnen, and Jürg Schweizer
Assessing and forecasting avalanche hazard is crucial for the safety of people and infrastructure in mountain areas. Over 20 years of data covering snow precipitation, snowpack properties, weather, on-site observations, and avalanche danger has been collected in the context of operational avalanche forecasting for the Swiss Alps. The quality and breadth of this dataset makes it suitable for machine learning techniques.
Forecasters currently process a huge and redundant dataset "manually" to produce daily avalanche bulletins during the winter season. The purpose of this work is to provide forecasters with automated tools to support their work.
By combining clustering and classification algorithms, we are able to reduce the amount of information that needs to be processed and identify relevant weather and snow patterns that characterize a given avalanche situation.
How to cite: Hendrick, M., Pérez-Guillén, C., van Herwijnen, A., and Schweizer, J.: Machine learning as a tool for avalanche forecasting, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13419, https://doi.org/10.5194/egusphere-egu2020-13419, 2020.
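The clustering half of the clustering-plus-classification pipeline can be sketched with plain k-means over daily feature vectors. The feature dimensionality, number of clusters and random data below are assumptions for illustration; the abstract does not name the specific algorithms used.

```python
import numpy as np

rng = np.random.default_rng(4)

def kmeans(X, k, iters=50):
    """Plain k-means: group days with similar weather/snowpack features."""
    centers = X[rng.choice(len(X), size=k, replace=False)]  # random init
    for _ in range(iters):
        # assign each day to its nearest cluster centre
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # move each centre to the mean of its assigned days
        for c in range(k):
            if (labels == c).any():
                centers[c] = X[labels == c].mean(axis=0)
    return labels, centers

X = rng.normal(size=(80, 6))   # stand-in for daily snow/weather features
labels, centers = kmeans(X, k=3)
```

Each cluster centre then summarises a recurring weather/snow pattern, reducing the redundant daily data stream to a handful of situations a forecaster (or a downstream classifier) can reason about.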
EGU2020-1870 | Displays | ITS4.1/NP4.2
Prediction of Chlorophyll and Phosphorus in Lake Ontario by Ensemble of Neural Network Models
Youyue Sun, Yu Li, Jinhui Jeanne Huang, and Edward McBean
Chlorophyll-a (CHLA) and total phosphorus (TP) are key indicators of water quality and eutrophication in lakes. It would be a great help to water management if CHLA and TP could be predicted with a certain lead time so that water quality control measures could be implemented. Since eutrophication is the result of complex bio-chemical-physical processes involving pH, temperature, dissolved oxygen (DO) and many other water quality parameters, discovering their internal correlations and relationships may help in the prediction of CHLA and TP. In this study, long-term (20 years) water quality data including CHLA, TP, total nitrogen (TN), turbidity (TB), sulphate, pH, and DO collected in Lake Ontario by Environment and Climate Change Canada were obtained. These data were analyzed using a group of Neural Network (NN) models, and ensemble strategies were evaluated. One particular ensemble of the following NN models, namely back propagation, Kohonen, probabilistic neural network (PNN), generalized regression neural network (GRNN), and group method of data handling (GMDH), was selected for its higher goodness of fit and its robustness in model validation. Compared with a single NN model, the ensemble model could provide more accurate predictions of CHLA and TP concentrations in Lake Ontario, and the prediction of CHLA and TP would be helpful in lake management, eco-restoration and public health risk assessment.
How to cite: Sun, Y., Li, Y., Huang, J. J., and McBean, E.: Prediction of Chlorophyll and Phosphorus in Lake Ontario by Ensemble of Neural Network Models, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1870, https://doi.org/10.5194/egusphere-egu2020-1870, 2020.
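Why an ensemble of NN models can beat any single member is easy to demonstrate: averaging predictions whose errors are independent shrinks the error roughly by the square root of the ensemble size. The synthetic CHLA "truth" and model errors below are illustrative, not Lake Ontario data or the study's actual models.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical: three imperfect models predicting CHLA, each adding
# independent noise around the truth; the ensemble averages them.
truth = rng.uniform(1.0, 10.0, size=200)
preds = np.stack([truth + rng.normal(scale=1.0, size=200) for _ in range(3)])

def rmse(a, b):
    """Root-mean-square error between prediction and truth."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

ensemble = preds.mean(axis=0)                      # simple averaging ensemble
single_rmse = [rmse(p, truth) for p in preds]      # each near 1.0
ens_rmse = rmse(ensemble, truth)                   # near 1/sqrt(3) ~ 0.58
```

Real ensembles (weighted averaging, stacking) refine this idea, but the gain comes from the same cancellation of uncorrelated member errors.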
EGU2020-2254 | Displays | ITS4.1/NP4.2
Retrieval of Arctic sea ice freeboard from passive microwave data using deep neural network
Junhwa Chi, Hyun-Cheol Kim, and Sung Jae Lee
Changes in Arctic sea ice cover represent one of the most visible indicators of climate change. While changes in sea ice extent affect the albedo, changes in sea ice volume explain changes in the heat budget and the exchange of fresh water between ice and the ocean. Global climate simulations predict that Arctic sea ice will exhibit a more significant change in volume than in extent. Satellite observations show a long-term negative trend in Arctic sea ice during all seasons, particularly in summer. Sea ice volume has been estimated by the ICESat and CryoSat-2 satellites, and NASA’s second-generation spaceborne lidar mission, ICESat-2, was successfully launched in 2018. Although these sensors can measure sea ice freeboard precisely, long revisit cycles and narrow swaths make it difficult to monitor freeboard effectively over the entire Arctic Ocean. Passive microwave sensors are widely used in the retrieval of sea ice concentration. Thanks to their high temporal resolution and wide swaths, these sensors can produce daily sea ice concentration maps over the entire Arctic Ocean. Brightness temperatures from passive microwave sensors are often used to estimate sea ice freeboard for first-year ice, but it is difficult to relate them to the physical characteristics governing the height of multi-year ice. In the machine learning community, deep learning has gained attention and notable success in addressing more complicated decision making using multiple hidden layers. In this study, we propose a deep learning based Arctic sea ice freeboard retrieval algorithm combining brightness temperature data from the AMSR2 passive microwave sensor with sea ice freeboard data from ICESat-2. The proposed retrieval algorithm enables daily freeboard estimates for both first- and multi-year ice over the entire Arctic Ocean.
The estimated freeboard values from the AMSR2 are then quantitatively and qualitatively compared with other sea ice freeboard or thickness products.
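The retrieval described above maps multi-channel brightness temperatures to a freeboard value. As a rough illustration of that regression setup (not the authors' network: the channel count, architecture, and synthetic data below are all assumptions), a small fully connected neural network can be trained on brightness-temperature vectors:

```python
# Illustrative sketch: regress sea ice freeboard from multi-channel
# passive microwave brightness temperatures with a small neural network.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in: 7 AMSR2-like brightness-temperature channels (K)
# and a freeboard target (m) with a loose linear dependence plus noise.
n_samples, n_channels = 2000, 7
tb = rng.uniform(150.0, 270.0, size=(n_samples, n_channels))
weights = rng.uniform(1e-4, 4e-4, size=n_channels)
freeboard = tb @ weights + rng.normal(0.0, 0.02, n_samples)

X_train, X_test, y_train, y_test = train_test_split(
    tb, freeboard, test_size=0.25, random_state=0
)

# Scale inputs before the network; brightness temperatures span ~100 K.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
)
model.fit(X_train, y_train)
pred = model.predict(X_test)
rmse = float(np.sqrt(np.mean((pred - y_test) ** 2)))
print("RMSE (m):", rmse)
```

In practice the training pairs would come from co-locating AMSR2 pixels with ICESat-2 freeboard retrievals, which the synthetic arrays above merely stand in for.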
How to cite: Chi, J., Kim, H.-C., and Lee, S. J.: Retrieval of Arctic sea ice freeboard from passive microwave data using deep neural network, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2254, https://doi.org/10.5194/egusphere-egu2020-2254, 2020.
EGU2020-9428 | Displays | ITS4.1/NP4.2
Riverscape classification by using machine learning in combination with satellite and UAV images
Hitoshi Miyamoto, Takuya Sato, Akito Momose, and Shuji Iwami
This presentation examined a new method for classifying riverine land cover by applying machine learning to both satellite and UAV (Unmanned Aerial Vehicle) images of a Kurobe River channel. The method used Random Forests (RF) for the classification, combining the RGB bands and NDVI (Normalized Difference Vegetation Index) of the images. In the process, the high-resolution UAV images made it possible to create accurate training data for the land cover classification of the low-resolution satellite images. The results indicated that combining the high- and low-resolution images in the machine learning could effectively detect water, gravel/sand beds, trees, and grasses in the satellite images with a certain degree of accuracy. In contrast, using only the low-resolution satellite images failed to distinguish between trees and grasses. These results support the effectiveness of the present machine learning method, combining satellite and UAV images, for identifying the most critical areas in riparian vegetation management.
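The per-pixel classification step described above can be sketched as follows. This is a hedged illustration only: the class labels, feature ranges, and labelling rule are invented stand-ins for the UAV-derived training data, not the study's dataset.

```python
# Sketch: Random Forest land-cover classification from per-pixel
# RGB + NDVI features, in the spirit of the approach described above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Synthetic training pixels: R, G, B reflectances in [0, 1], NDVI in [-1, 1].
n = 400
rgb = rng.uniform(0.0, 1.0, size=(n, 3))
ndvi = rng.uniform(-1.0, 1.0, size=(n, 1))
X = np.hstack([rgb, ndvi])

# Toy labelling rule (hypothetical): dense vegetation has high NDVI,
# water is dark with low NDVI, the rest is gravel/sand.
y = np.where(ndvi[:, 0] > 0.4, "tree",
    np.where(ndvi[:, 0] > 0.1, "grass",
    np.where(rgb.mean(axis=1) < 0.3, "water", "gravel_sand")))

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(clf.predict([[0.1, 0.1, 0.1, -0.5]]))  # a dark, low-NDVI pixel
```

In the study's workflow, the labels `y` would come from the high-resolution UAV imagery, and `X` from the corresponding low-resolution satellite pixels.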
How to cite: Miyamoto, H., Sato, T., Momose, A., and Iwami, S.: Riverscape classification by using machine learning in combination with satellite and UAV images, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9428, https://doi.org/10.5194/egusphere-egu2020-9428, 2020.
EGU2020-20430 | Displays | ITS4.1/NP4.2
Can we predict Dry Air Intrusions using an Artificial Neural Network?
Stav Nahum, Shira Raveh-Rubin, Jonathan Shlomi, and Vered Silverman
Dry-air intrusions (DIs) descending from the upper troposphere toward the surface are often associated with abrupt modification of the atmospheric boundary layer, the air-sea interface, and high-impact weather events. Understanding the triggering mechanism of DIs is important for predicting the likelihood of their occurrence in both weather forecasts and future climate projections.
The current identification method for DIs is based on a systematic, computationally costly Lagrangian method that requires high vertical resolution of the wind field at sub-daily intervals. The accurate prediction of surface weather conditions is therefore potentially limited. Moreover, the large case-to-case variability of these events makes it challenging to compose an objective algorithm for predicting the timing and location of their initiation.
Here we test the ability of deep neural networks, originally designed for computer vision purposes, to identify the DI phenomenon based on instantaneous 2-dimensional maps of commonly available atmospheric parameters. Our trained neural network is able to successfully predict DI origins using three instantaneous 2-D maps of geopotential heights.
Our results demonstrate how machine learning can be used to overcome the limitations of the traditional identification method, introducing the possibility to evaluate and quantify the occurrence of DIs instantaneously, avoiding costly computations and the need for high resolution data sets which are not available for most atmospheric data sets. In particular, for the first time, it is possible to predict the occurrence of DI events up to two days before the actual descent is complete.
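The setup described above (stacked 2-D geopotential-height maps in, binary DI label out) can be sketched in miniature. This is a deliberately simplified stand-in, not the authors' architecture: a computer-vision network would use convolutional layers, whereas the toy below flattens the maps into a small fully connected classifier, and the data and labelling rule are invented.

```python
# Simplified sketch: classify DI occurrence from three stacked 2-D
# geopotential-height maps (here, a fully connected net on flattened maps
# stands in for a convolutional network).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
n, h, w = 300, 16, 16                           # samples and coarse map grid
maps = rng.normal(0.0, 1.0, size=(n, 3, h, w))  # 3 geopotential levels

# Toy rule (hypothetical): DI events co-occur with a positive mean
# anomaly in the upper-level map.
labels = (maps[:, 0].mean(axis=(1, 2)) > 0.05).astype(int)

X = maps.reshape(n, -1)                         # (3, h, w) -> feature vector
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=800, random_state=0)
clf.fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```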
How to cite: Nahum, S., Raveh-Rubin, S., Shlomi, J., and Silverman, V.: Can we predict Dry Air Intrusions using an Artificial Neural Network?, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20430, https://doi.org/10.5194/egusphere-egu2020-20430, 2020.
EGU2020-22148 | Displays | ITS4.1/NP4.2
Hydrochemical Classification of Groundwater with Artificial Neural Networks
Valentin Haselbeck, Jannes Kordilla, Florian Krause, and Martin Sauter
Growing datasets of inorganic hydrochemical analyses, together with large differences in the measured concentrations, raise the demand for data compression while maintaining critical information. The data should subsequently be displayed in an orderly and understandable way. Here, a type of artificial neural network, Kohonen’s self-organizing map (SOM), is trained on inorganic hydrochemical data. Based on this network, clusters are built and associated with the spatial distribution of salinity sources at a former potash mining site. This combined two-step clustering approach managed to assign the groundwater analyses automatically to five different clusters, three geogenic and two anthropogenic, according to their inorganic chemical composition. The spatial distribution of the SOM clusters helps to understand the large-scale hydrogeological context. This approach provides the hydrogeologist with a tool to quickly and automatically analyze large datasets and present them in a clear and comprehensible format.
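A self-organizing map of the kind named above can be implemented compactly. The following is an assumption-laden illustration of the two-step idea (train a SOM, then group samples by their best-matching nodes); the grid size, decay schedule, and "ion concentration" data are hypothetical, not the site's hydrochemistry.

```python
# Minimal self-organizing map (SOM) sketch with a Gaussian neighbourhood
# update and linearly decayed learning rate and radius.
import numpy as np

def train_som(data, grid=(4, 4), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Train a small rectangular SOM; returns node weights (gx, gy, dim)."""
    rng = np.random.default_rng(seed)
    gx, gy = grid
    weights = rng.uniform(data.min(), data.max(), size=(gx, gy, data.shape[1]))
    coords = np.stack(
        np.meshgrid(np.arange(gx), np.arange(gy), indexing="ij"), axis=-1)
    n_steps, step = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = step / n_steps
            lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
            # best-matching unit (closest node in feature space)
            bmu = np.unravel_index(
                np.argmin(((weights - x) ** 2).sum(axis=-1)), (gx, gy))
            # Gaussian neighbourhood: nodes near the BMU move most
            d2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
            h = np.exp(-d2 / (2 * sigma ** 2))[..., None]
            weights += lr * h * (x - weights)
            step += 1
    return weights

# Hypothetical 3-ion concentration vectors for 60 groundwater samples,
# drawn from three loose end-member compositions.
rng = np.random.default_rng(3)
samples = np.vstack([rng.normal(m, 0.3, size=(20, 3))
                     for m in ([0, 0, 0], [3, 3, 0], [0, 3, 3])])
som = train_som(samples)

# Step two: assign each sample to its best-matching SOM node.
flat = som.reshape(-1, 3)
nodes = np.argmin(((samples[:, None, :] - flat[None]) ** 2).sum(-1), axis=1)
print("distinct occupied nodes:", len(set(nodes.tolist())))
```

In the study's approach, the occupied SOM nodes themselves would then be clustered to yield the five hydrochemical groups.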
How to cite: Haselbeck, V., Kordilla, J., Krause, F., and Sauter, M.: Hydrochemical Classification of Groundwater with Artificial Neural Networks, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22148, https://doi.org/10.5194/egusphere-egu2020-22148, 2020.
EGU2020-21947 | Displays | ITS4.1/NP4.2
Estimation of fine-scale relative humidity profiles: an issue for understanding the atmospheric water cycle
Veronique Michot, Helene Brogniez, Mathieu Vrac, Soulivanh Thao, Helene Chepfer, Pascal Yiou, and Christophe Dufour
The multi-scale interactions at the origin of the links between clouds and water vapour are essential for the Earth's energy balance and thus the climate, from local to global scales. Knowledge of the distribution and variability of water vapour in the troposphere is indeed a major issue for the understanding of the atmospheric water cycle. At present, these interactions are poorly known at regional and local scales, i.e. below 100 km, and are therefore poorly represented in numerical climate models. This is why we have sought to predict cloud-scale relative humidity profiles in the intertropical zone, using a non-parametric statistical downscaling method called quantile regression forest. The procedure includes co-located data from three satellites: CALIPSO lidar and CloudSat radar, used as predictors and providing cloud properties at 90 m and 1.4 km horizontal resolution respectively, and SAPHIR data, used as a predictand and providing relative humidity at an initial horizontal resolution of 10 km. Quantile regression forests were used to predict relative humidity profiles at the CALIPSO and CloudSat scales. These predictions are able to reproduce a relative humidity variability consistent with the cloud profiles, as confirmed by coefficients of determination greater than 0.7 relative to observed relative humidity and Continuous Rank Probability Skill Scores between 0 and 1 relative to climatology. Lidar measurements from the NARVAL 1&2 campaigns and radiosondes from the EUREC4A campaign were also used to compare relative humidity profiles at the SAPHIR scale and at the scale of the quantile regression forest predictions.
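A quantile regression forest, the method named above, differs from an ordinary random forest in that it keeps the full distribution of training targets in each leaf rather than only their mean. One common way to realise this (sketched below on synthetic data; the predictors are not the SAPHIR/CALIPSO/CloudSat fields) is to train a standard scikit-learn forest and pool the leaf-mate targets at prediction time:

```python
# Quantile regression forest sketch: pool the training targets that share
# a leaf with the query point, across all trees, then take a quantile.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
X = rng.uniform(0, 1, size=(500, 2))
y = 40 + 40 * X[:, 0] + rng.normal(0, 5, 500)   # pseudo relative humidity (%)

forest = RandomForestRegressor(n_estimators=50, min_samples_leaf=10,
                               random_state=0).fit(X, y)
# Leaf index of every training sample in every tree: (n_samples, n_trees).
train_leaves = forest.apply(X)

def predict_quantile(forest, X_new, q):
    """Predict the q-quantile by pooling leaf-mates across all trees."""
    new_leaves = forest.apply(X_new)
    out = np.empty(len(X_new))
    for i, leaves in enumerate(new_leaves):
        pooled = np.concatenate([
            y[train_leaves[:, t] == leaf] for t, leaf in enumerate(leaves)])
        out[i] = np.quantile(pooled, q)
    return out

X_test = np.array([[0.1, 0.5], [0.9, 0.5]])
print("median RH:", predict_quantile(forest, X_test, 0.5))
```

Predicting several quantiles per profile is what lets the method quantify the downscaling uncertainty rather than return a single value.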
How to cite: Michot, V., Brogniez, H., Vrac, M., Thao, S., Chepfer, H., Yiou, P., and Dufour, C.: Estimation of fine-scale relative humidity profiles: an issue for understanding the atmospheric water cycle, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21947, https://doi.org/10.5194/egusphere-egu2020-21947, 2020.
EGU2020-21593 | Displays | ITS4.1/NP4.2
A Machine Learning Approach to Cloud Masking in Sentinel-3 SLSTR Data
Samuel Jackson, Jeyarajan Thiyagalingam, and Caroline Cox
Clouds appear ubiquitously in the Earth's atmosphere, and thus present a persistent problem for the accurate retrieval of remotely sensed information. The task of identifying which pixels are cloud, and which are not, is what we refer to as the cloud masking problem. The task of cloud masking essentially boils down to assigning a binary label, representing either "cloud" or "clear", to each pixel.
Although this problem appears trivial, it is often complicated by a diverse number of issues that affect the imagery obtained from remote sensing instruments. For instance, snow, sea ice, dust, smoke, and sun glint can easily challenge the robustness and consistency of any cloud masking algorithm. The cloud masking problem is also further complicated by geographic and seasonal variation in acquired scenes.
In this work, we present a machine learning approach to handle the problem of cloud masking for the Sea and Land Surface Temperature Radiometer (SLSTR) on board the Sentinel-3 satellites. Our model uses Gradient Boosting Decision Trees (GBDTs) to perform pixel-wise segmentation of satellite images. The model is trained using a hand-labelled dataset of ~12,000 individual pixels covering both the spatial and temporal domains of the SLSTR instrument and utilises the combined channels of the dual-view swaths. Pixel-level annotations, while lacking spatial context, have the advantage of being cheaper to obtain compared to fully labelled images, a major problem in applying machine learning to remote sensing imagery.
We validate the performance of our mask using cross validation and compare its performance with two baseline models provided in the SLSTR level 1 product. We show up to 10% improvement in binary classification accuracy compared with the baseline methods. Additionally, we show that our model has the ability to distinguish between different classes of cloud with reasonable accuracy.
How to cite: Jackson, S., Thiyagalingam, J., and Cox, C.: A Machine Learning Approach to Cloud Masking in Sentinel-3 SLSTR Data, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21593, https://doi.org/10.5194/egusphere-egu2020-21593, 2020.
EGU2020-21204 | Displays | ITS4.1/NP4.2
Analysis of research trends using Latent Dirichlet Allocation for geologic subdisciplines in South Korea
Taeyong Kim and Minjune Yang
Since the mid-twentieth century, geology in South Korea has advanced considerably as a scientific discipline. Over the past few decades, geology has interacted with physical and engineering viewpoints, so modern geology needs to be interpreted from an interdisciplinary perspective. This study aimed to classify the academic subdisciplines of geology in South Korea and to analyze the evolutionary trend of each subdiscipline over the 54 years from 1964 through 2018. In preprocessing, we collected 13,266 titles from 10 Korean geological journals and removed uninformative words. After that, we classified geologic subdisciplines with Latent Dirichlet Allocation (LDA), a well-suited tool for finding topics in text data. According to the results of this study, the optimal number of subdisciplines in the LDA was nine (mineralogy, petrology, sedimentology, economic geology, geotechnical engineering, engineering geology, environmental geology, geophysics, and seismology). We then calculated the annual proportion of each subdiscipline to investigate evolutionary trends using polynomial regression. Results showed that the proportions of mineralogy, petrology, sedimentology, and economic geology increased in 1980; those of geotechnical engineering and engineering geology increased in 1990; and those of environmental geology, geophysics, and seismology increased in 1995. The results of this study fill an important gap in understanding the research trends of geologic subdisciplines in South Korea, showing their emergence, growth, and diminution.
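The workflow above (titles, term counts, LDA topics) can be sketched in a few lines. The toy English titles below are invented for illustration; the study used 13,266 Korean journal titles and found nine topics.

```python
# Minimal LDA topic-modelling sketch: vectorise titles into term counts,
# fit LDA, and read off the dominant topic of each title.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

titles = [
    "groundwater contamination at a mining site",
    "aquifer recharge and groundwater flow modelling",
    "seismic wave velocity in the crust",
    "earthquake source mechanism and seismic hazard",
    "mineral composition of granite plutons",
    "petrology of volcanic rock suites",
]

counts = CountVectorizer(stop_words="english").fit_transform(titles)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(counts)
doc_topics = lda.transform(counts)      # per-title topic proportions
print(doc_topics.argmax(axis=1))        # dominant topic per title
```

The per-year topic proportions used for the trend analysis would come from averaging `doc_topics` over the titles published in each year.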
How to cite: Kim, T. and Yang, M.: Analysis of research trends using Latent Dirichlet Allocation for geologic subdisciplines in South Korea, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21204, https://doi.org/10.5194/egusphere-egu2020-21204, 2020.
EGU2020-22666 | Displays | ITS4.1/NP4.2
A machine learning approach to achieve accurate time series forecast of sea-wave conditions
Giulia Cremonini, Giovanni Besio, Daniele Lagomarsino, and Agnese Seminara
Reliable forecast of environmental variables is fundamental in managing risk associated with hazard scenarios. In this work, we use state-of-the-art machine learning algorithms to build forecasting models and to get accurate estimation of sea wave conditions. We exploit multivariate time series of environmental variables, extracted either from a hindcast database (provided by the MeteOcean Group at DICCA) or from observed data from sparse buoys. In this way, future values of sea wave height can be predicted in order to evaluate the risk associated with incoming scenarios. The aim is to provide new forecasting tools representing an alternative to physically based models, which have higher computational cost.
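One common form such a forecasting model takes (an assumption about the setup, sketched here on a synthetic swell-like signal rather than hindcast or buoy data) is to predict wave height a few steps ahead from lagged values of the series:

```python
# Sketch: forecast significant wave height `horizon` steps ahead from
# the previous `lags` values, using gradient-boosted regression trees.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(6)
t = np.arange(2000)
# Synthetic hourly wave-height series (m): diurnal-like cycle plus noise.
hs = 2 + np.sin(2 * np.pi * t / 24) + 0.2 * rng.normal(size=t.size)

lags, horizon = 6, 3            # use 6 past values, forecast 3 steps ahead
X = np.stack([hs[i:i + lags] for i in range(len(hs) - lags - horizon)])
y = hs[lags + horizon:]

split = int(0.8 * len(X))       # chronological train/test split
model = GradientBoostingRegressor(random_state=0).fit(X[:split], y[:split])
rmse = float(np.sqrt(np.mean((model.predict(X[split:]) - y[split:]) ** 2)))
print("test RMSE (m):", round(rmse, 3))
```

Keeping the train/test split chronological, as above, is what makes the score an honest estimate of forecast skill rather than interpolation skill.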
How to cite: Cremonini, G., Besio, G., Lagomarsino, D., and Seminara, A.: A machine learning approach to achieve accurate time series forecast of sea-wave conditions, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22666, https://doi.org/10.5194/egusphere-egu2020-22666, 2020.
EGU2020-21634 | Displays | ITS4.1/NP4.2
The data cube system to EO datasets: the DCS4COP project
João Felipe Cardoso dos Santos, Dimitry Van Der Zande, and Nabil Youdjou
Earth Observation (EO) data availability is drastically increasing thanks to the Copernicus Sentinel missions. In 2014 Sentinel data volumes were approximately 200 TB (one operational mission) while in 2019 these volumes rose to 12 PB (nine operational missions) and will increase further with the planned launch of new Sentinel satellites. Dealing with this big data evolution has become an additional challenge in the development of downstream services next to algorithm development, product quality control, and data dissemination techniques.
The H2020 project ‘Data Cube Service for Copernicus (DCS4COP)’ addresses the downstream challenges of big data by integrating Copernicus services in a data cube system. A data cube is typically a four-dimensional object, with a parameter dimension and three shared dimensions (time, latitude, longitude). Traditional geographical map data are transformed into a data cube with user-defined spatial and temporal resolutions, using tools such as mathematical operations, sub-setting, resampling, or gap filling to obtain a set of consistent parameters.
This work describes how different EO datasets are integrated into a data cube system to monitor water quality in the Belgian Continental Shelf (BCS) for the period from 2017 to 2019. The EO data sources are divided into four groups: 1) high-resolution data with low temporal coverage (i.e. Sentinel-2), 2) medium-resolution data with daily coverage (i.e. Sentinel-3), 3) low-resolution geostationary data with high coverage frequency (i.e. MSG-SEVIRI), and 4) merged EO data with different spatial and temporal information acquired from CMEMS. Each EO dataset from groups 1 to 3 has its own thematic processor responsible for the acquisition of Level 1 data, the application of atmospheric corrections, and a first quality control (QC), resulting in a Level 2 quality-controlled remote sensing reflectance (Rrs) product. The Level 2 Rrs is the main product used to generate other ocean colour products such as chlorophyll-a and suspended particulate matter. Each product generated from the Rrs passes a second QC related to its characteristics and improvements (when applied) and is organized in a common, product- and sensor-specific data format and structure to facilitate direct integration. At the end of the process, these products are defined as quality-controlled analysis ready data (ARD) and are ingested into the data cube system, enabling fast and easy access to these big volumes of multi-scale water quality products for further analysis (i.e. downstream services). The data cube system grants fast and straightforward access by converting netCDF data to Zarr and placing it on the server. In Zarr datasets, the object is divided into chunks and compressed, while the metadata are stored in lightweight .json files. Zarr works well on both local filesystems and cloud-based object stores, which makes it possible to use it through a variety of tools such as an interactive data viewer or Jupyter notebooks.
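The chunk-plus-JSON-metadata layout that makes Zarr attractive can be mimicked with the standard library alone. The sketch below is a conceptual illustration with a hypothetical layout, not the Zarr format itself; a real deployment would use the zarr/xarray libraries.

```python
# Conceptual mimic of chunked, compressed array storage with JSON
# metadata: each chunk can be read back without touching the others.
import json
import zlib
import numpy as np

def to_chunked_store(array, chunk_rows):
    """Split a 2-D array row-wise into zlib-compressed chunks + metadata."""
    chunks = {}
    for i, start in enumerate(range(0, array.shape[0], chunk_rows)):
        block = np.ascontiguousarray(array[start:start + chunk_rows])
        chunks[str(i)] = zlib.compress(block.tobytes())
    meta = json.dumps({"shape": list(array.shape),
                       "dtype": str(array.dtype),
                       "chunk_rows": chunk_rows})
    return meta, chunks

def read_chunk(meta, chunks, i):
    """Decompress a single chunk, guided only by the JSON metadata."""
    m = json.loads(meta)
    raw = zlib.decompress(chunks[str(i)])
    return np.frombuffer(raw, dtype=m["dtype"]).reshape(-1, m["shape"][1])

field = np.linspace(0, 1, 40).reshape(8, 5)   # stand-in for an Rrs map
meta, chunks = to_chunked_store(field, chunk_rows=2)
print(len(chunks), "chunks;", read_chunk(meta, chunks, 0).shape)
```

This per-chunk access pattern is what lets a data cube serve a small spatial or temporal subset without streaming the full multi-petabyte archive.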
How to cite: Cardoso dos Santos, J. F., Van Der Zande, D., and Youdjou, N.: The data cube system to EO datasets: the DCS4COP project, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21634, https://doi.org/10.5194/egusphere-egu2020-21634, 2020.
EGU2020-21489 | Displays | ITS4.1/NP4.2
GNSS data quality check in the EPN network
Andras Fabian, Carine Bruyninx, Juliette Legrand, and Anna Miglio
Global Navigation Satellite Systems (GNSS) are a widespread, cost-effective technique for geodetic applications and for monitoring the Earth’s atmosphere. Consequently, the density of GNSS networks has grown considerably over the last decade. Each of the networks collects huge amounts of data from permanently operating GNSS stations. The quality of the data is variable, depending on the evaluated time period and satellite system. Conventionally, the quality information is extracted from daily estimates of different types of GNSS parameters such as the number of data gaps, multipath level, number of cycle slips, number of dual frequency observations with respect to the expected number, and from their combinations.
The EUREF Permanent GNSS Network Central Bureau (EPN CB, Bruyninx et al., 2019) operationally collects and analyses the quality of data from more than 300 GNSS stations and investigates the main causes of any quality degradation. EPN CB currently operates a semi-automatic (followed by a manual) data-monitoring tool to detect quality degradations and investigate the source of the problems. In the upcoming years, this data-monitoring tool will also be used to monitor the GNSS component of the European Plate Observing System (EPOS), expected to include more than 3000 GNSS stations. This anticipated growth in the number of GNSS stations to be monitored will make it increasingly challenging to select high-quality GNSS data. EPN CB’s current system requires time-consuming semi-automatic inspection of data quality, and it is not designed to handle these larger amounts of data. In addition, the current system does not exploit correlations between the daily data quality, time series, and the GNSS station metadata (such as equipment type and receiver firmware) often common to many stations.
In this poster, we will first present the currently used method of GNSS data quality checking and its limitations. Based on more than 20 years of GNSS observations collected in the EPN, we will show typical cases of correlations between the time series of data quality metrics and GNSS station metadata. Then, we will set up the requirements for, and design, the new GNSS data quality monitoring system capable of handling more than 3000 stations. Based on the collected EPN samples and the typical cases, we will introduce ongoing improvements taking advantage of artificial intelligence techniques, show a possible design of the neural network, and present the supervised training of the neural network.
Bruyninx C., Legrand J., Fabian A., Pottiaux E. (2019) GNSS Metadata and Data Validation in the EUREF Permanent Network. GPS Sol., 23(4), https://doi.org/10.1007/s10291-019-0880-9
How to cite: Fabian, A., Bruyninx, C., Legrand, J., and Miglio, A.: GNSS data quality check in the EPN network, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21489, https://doi.org/10.5194/egusphere-egu2020-21489, 2020.
EGU2020-1729 | Displays | ITS4.1/NP4.2
Resistivity-depth imaging of airborne transient electromagnetic method based on an artificial neural network
Jifeng Zhang, Bing Feng, and Dong Li
Artificial neural networks, an important part of artificial intelligence, have been widely used in many fields such as information processing, automation, and economics, and serve as one of the efficient tools in geophysical data processing. However, applications in geophysical electromagnetic methods are still relatively rare. In this paper, a BP (back-propagation) neural network is combined with the airborne transient electromagnetic method to image subsurface geological structures.
We developed an artificial neural network code to map the distribution of subsurface geological conductivity for the airborne transient electromagnetic method. It avoids the complex derivation of electromagnetic field formulas and only requires input and transfer functions to obtain the quasi-resistivity image section. First, a training sample set is formed from the airborne transient electromagnetic responses of homogeneous half-space models with different resistivities; the flight altitude and the time constant are taken as input variables of the network, and the pseudo-resistivity is taken as the output variable. Then, a double-hidden-layer BP neural network is established in accordance with the mapping relationship between quasi-resistivity and airborne transient electromagnetic response. By analysing the mean square error curve, a training termination criterion for the BP neural network is presented. Next, the trained BP neural network is used to interpret the airborne transient electromagnetic responses of various typical layered geo-electric models, and the results are compared with those of the all-time apparent resistivity algorithm. After extensive testing, reasonable BP neural network parameters were selected, and the mapping from airborne TEM responses to quasi-resistivity was realized. The results show that the resistivity imaging from the BP neural network approach is much closer to the true resistivity of the model, and its response to anomalous bodies is better than that of the all-time apparent resistivity numerical method. Finally, this imaging technique was used to process field data acquired by the airborne transient method in the Huayangchuan area. The quasi-resistivity depth sections calculated by the BP neural network and the all-time apparent resistivity method are in good agreement with the actual geological situation, which further verifies the effectiveness and practicability of the algorithm.
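A double-hidden-layer back-propagation network of the kind described can be sketched in pure Python. The layer sizes, learning rate, and the smooth synthetic target below are illustrative assumptions, not the authors' actual configuration; the two inputs stand in for the normalised flight altitude and time constant.

```python
import math
import random

random.seed(0)

def make_layer(n_in, n_out):
    # each unit: n_in weights plus a trailing bias term
    return [[random.uniform(-0.5, 0.5) for _ in range(n_in + 1)]
            for _ in range(n_out)]

def forward(layers, x):
    """Return the activations of every layer: tanh hidden units, linear output."""
    acts = [x]
    for li, layer in enumerate(layers):
        prev = acts[-1]
        out = []
        for w in layer:
            s = w[-1] + sum(wi * pi for wi, pi in zip(w[:-1], prev))
            out.append(s if li == len(layers) - 1 else math.tanh(s))
        acts.append(out)
    return acts

def train_step(layers, x, y, lr=0.05):
    """One stochastic-gradient step on the squared error; returns the loss."""
    acts = forward(layers, x)
    deltas = [[acts[-1][0] - y]]          # output delta for 0.5*(pred - y)^2
    for li in range(len(layers) - 1, 0, -1):
        layer, nxt = layers[li], deltas[0]
        d = []
        for j, a in enumerate(acts[li]):  # back-propagate through tanh layer
            err = sum(nxt[k] * layer[k][j] for k in range(len(layer)))
            d.append(err * (1 - a * a))
        deltas.insert(0, d)
    for li, layer in enumerate(layers):
        for k, w in enumerate(layer):
            g = deltas[li][k]
            for j, a in enumerate(acts[li]):
                w[j] -= lr * g * a
            w[-1] -= lr * g
    return 0.5 * (acts[-1][0] - y) ** 2

# 2 inputs -> two hidden layers of 4 tanh units -> 1 linear output
net = [make_layer(2, 4), make_layer(4, 4), make_layer(4, 1)]
data = [([a / 10, t / 10], math.sin(a / 10) + 0.05 * t)
        for a in range(10) for t in range(10)]
first = sum(train_step(net, x, y) for x, y in data)
for _ in range(200):
    last = sum(train_step(net, x, y) for x, y in data)
print(first, last)
```

The training termination criterion mentioned in the abstract would, in this sketch, amount to monitoring the epoch loss (`last`) and stopping once its decrease flattens out.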
How to cite: Zhang, J., Feng, B., and Li, D.: Resistivity-depth imaging of airborne transient electromagnetic method based on an artificial neural network, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1729, https://doi.org/10.5194/egusphere-egu2020-1729, 2020.
EGU2020-1869 | Displays | ITS4.1/NP4.2
Retrieval of Water Quality Parameters in Lake Ontario Based on Hyperspectral Remote Sensing Data and Intelligent Algorithms
Yu Li, Youyue Sun, Jinhui Jeanne Huang, and Edward McBean
With the increasingly prominent ecological and environmental problems in lakes, monitoring lake water quality by satellite remote sensing is in increasingly high demand. Traditional water quality sampling is normally conducted manually and is time-consuming and labor-intensive. It cannot provide a full picture of a waterbody over time due to limited sampling points and low sampling frequency. We propose to use hyperspectral remote sensing in conjunction with machine learning technologies to retrieve water quality parameters and to map these parameters across a lake. The retrieval of both optically active parameters, chlorophyll-a (CHLA) and dissolved oxygen concentration (DO), and non-optically active parameters, total phosphorus (TP), total nitrogen (TN), turbidity (TB), and pH, was studied in this research. A comparison of three machine learning algorithms, Random Forests (RF), Support Vector Regression (SVR), and Artificial Neural Networks, was conducted. Water quality parameters collected by Environment and Climate Change Canada over 20 years were used as the ground truth for model training and validation. Two sets of remote sensing data, from MODIS and Sentinel-2, were utilized and evaluated. This research proposes a new approach to retrieve both optically active and non-optically active water quality parameters and provides a new strategy for water quality monitoring.
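The model-comparison protocol can be sketched as a train/validation split scored by RMSE. Everything below is a stand-in: a mean-value baseline and a univariate linear fit take the place of the RF/SVR/ANN models, and synthetic reflectance-vs-chlorophyll pairs take the place of the ECCC measurements.

```python
import math
import random

random.seed(1)

def rmse(pred, obs):
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def fit_linear(xs, ys):
    """Ordinary least squares for one predictor (e.g. one band reflectance)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return lambda x: my + b * (x - mx)

# synthetic stand-in data: reflectance vs chlorophyll-a with noise
xs = [random.random() for _ in range(200)]
ys = [2.0 * x + 0.3 + random.gauss(0, 0.05) for x in xs]
train_x, val_x = xs[:150], xs[150:]
train_y, val_y = ys[:150], ys[150:]

mean_val = sum(train_y) / len(train_y)
models = {
    "mean baseline": lambda x: mean_val,
    "linear": fit_linear(train_x, train_y),
}
scores = {name: rmse([f(x) for x in val_x], val_y)
          for name, f in models.items()}
print(scores)
```

Swapping the stand-ins for the real regressors leaves the harness unchanged, which is the point of a fixed ground-truth split: every model is scored on the same held-out samples.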
How to cite: Li, Y., Sun, Y., Huang, J. J., and McBean, E.: Retrieval of Water Quality Parameters in Lake Ontario Based on Hyperspectral Remote Sensing Data and Intelligent Algorithms, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1869, https://doi.org/10.5194/egusphere-egu2020-1869, 2020.
EGU2020-9269 | Displays | ITS4.1/NP4.2 | Highlight
Comparing Causal Discovery Methods using Synthetic and Real Data
Christoph Käding and Jakob Runge
Unveiling causal structures, i.e., distinguishing cause from effect, from observational data plays a key role in climate science as well as in other fields like medicine or economics. Hence, a number of methods has been developed to address this problem. Recent decades have seen methods like Granger causality or causal network learning algorithms, which are, however, not generally applicable in every scenario. Given two variables X and Y, it is still a challenging problem to decide whether X causes Y or Y causes X. Recently, there has been progress in the framework of structural causal models, which enable the discovery of causal relationships by restricting the functional dependencies (e.g., only linear) and noise models (e.g., only non-Gaussian noise). However, each of these comes with its own requirements and constraints. Since the corresponding conditions are usually unknown in real scenarios, it is hard to choose the right method for a given application.
The goal of this work is to evaluate and compare a number of state-of-the-art techniques in a joint benchmark. To do so, we employ synthetic data, where we can control the dataset conditions precisely and hence can reason in detail about the resulting performance of the individual methods given their underlying assumptions. Further, we utilize real-world data to shed light on their capabilities in actual applications in a comparative manner. We concentrate on the case of two univariate variables due to the large number of possible application scenarios. A thorough study comparing even the latest developments is, to the best of our knowledge, so far not available in the literature.
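The bivariate problem can be illustrated with a toy version of the structural-causal-model idea: with a linear function and non-Gaussian (uniform) noise, the regression residuals are independent of the regressor only in the true causal direction. The squared-correlation check below is a crude stand-in for a proper independence test such as HSIC, and the data are synthetic assumptions.

```python
import random

random.seed(0)

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def residuals(xs, ys):
    """Residuals of the least-squares line ys ~ xs (with intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return [y - (my + b * (x - mx)) for x, y in zip(xs, ys)]

def dependence(res, reg):
    # crude independence proxy: correlation between squared residuals and
    # the squared regressor (close to zero if they are truly independent)
    return abs(corr([r * r for r in res], [x * x for x in reg]))

# toy linear non-Gaussian pair with ground truth X -> Y
x = [random.uniform(-1, 1) for _ in range(2000)]
y = [xi + 0.5 * random.uniform(-1, 1) for xi in x]

fwd = dependence(residuals(x, y), x)   # residual of Y ~ X against X
bwd = dependence(residuals(y, x), y)   # residual of X ~ Y against Y
print("X->Y" if fwd < bwd else "Y->X")
```

The asymmetry is exactly what such benchmarks probe: the check succeeds here because the noise is non-Gaussian, and would be uninformative for linear-Gaussian data, which is why controlled synthetic conditions matter.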
How to cite: Käding, C. and Runge, J.: Comparing Causal Discovery Methods using Synthetic and Real Data, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9269, https://doi.org/10.5194/egusphere-egu2020-9269, 2020.
EGU2020-9604 | Displays | ITS4.1/NP4.2
Spatiotemporal model for benchmarking causal discovery algorithms
Xavier-Andoni Tibau, Christian Reimers, Veronika Eyring, Joachim Denzler, Markus Reichstein, and Jakob Runge
We propose a spatiotemporal model system to evaluate methods of causal discovery. The use of causal discovery to improve our understanding of the spatiotemporal complex system Earth has become widespread in recent years (Runge et al., Nature Comm. 2019). One widespread application example is the estimation of the complex teleconnections among major climate modes of variability.
The challenges in estimating such causal teleconnection networks are given by (1) the requirement to reconstruct the climate modes from gridded climate fields (dimensionality reduction) and (2) general challenges for causal discovery, for instance, high dimensionality and nonlinearity. These challenges have so far been tackled independently. Both dimensionality reduction methods and causal discovery have made strong progress in recent years, but the interaction between the two has received little attention. Thanks to projects like CMIP, a vast amount of climate data is available. In climate models, climate modes of variability emerge as macroscale features, and it is challenging to objectively benchmark both dimension reduction and causal discovery methods since there is no ground truth for such emergent properties.
We propose a spatiotemporal model system that encodes causal relationships among well-defined modes of variability. The model can be thought of as an extension of the vector-autoregressive models well known in time series analysis. It provides a framework for experimenting with causal discovery in large spatiotemporal systems. For example, researchers can analyze how the performance of an algorithm is affected by different methods of dimensionality reduction and different causal discovery algorithms. Challenging features such as non-stationarity and regime dependence can also be modelled and evaluated. Such a model will help the scientific community to improve methods of causal discovery for climate science.
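The vector-autoregressive ground-truth idea can be sketched as follows; the coefficients, grid size, and spatial patterns are illustrative assumptions, not the proposed model system itself. Two latent "modes" follow VAR(1) dynamics with a known one-way causal link, and the observed gridded field is their projection onto fixed spatial patterns plus noise.

```python
import random

random.seed(42)

# VAR(1) coefficients: mode 1 drives mode 2 (0.5); no reverse link.
A = [[0.6, 0.0],
     [0.5, 0.3]]
T = 2000
z = [[0.0, 0.0]]
for _ in range(T):
    z1, z2 = z[-1]
    z.append([A[0][0] * z1 + A[0][1] * z2 + random.gauss(0, 1),
              A[1][0] * z1 + A[1][1] * z2 + random.gauss(0, 1)])

# Project each mode onto a fixed spatial pattern to obtain the gridded
# field a dimensionality-reduction method would have to invert
# (assumption: a 1-D three-point "latitude" grid for brevity).
patterns = [[1.0, 0.5, 0.0], [0.0, 0.5, 1.0]]
field = [[sum(zt[m] * patterns[m][i] for m in range(2)) + random.gauss(0, 0.1)
          for i in range(3)] for zt in z]

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

m1 = [zt[0] for zt in z]
m2 = [zt[1] for zt in z]
fwd = corr(m1[:-1], m2[1:])   # mode 1 leading mode 2: the encoded link
bwd = corr(m2[:-1], m1[1:])   # reverse lag: only indirect correlation
print(fwd, bwd)
```

Because the causal graph among the modes is fixed by construction, any combination of dimensionality reduction (recovering the modes from `field`) and causal discovery (recovering `A`) can be scored against a known ground truth.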
Runge, J., S. Bathiany, E. Bollt, G. Camps-Valls, D. Coumou, E. Deyle, C. Glymour, M. Kretschmer, M. D. Mahecha, J. Muñoz-Marı́, E. H. van Nes, J. Peters, R. Quax, M. Reichstein, M. Scheffer, B. Schölkopf, P. Spirtes, G. Sugihara, J. Sun, K. Zhang, and J. Zscheischler (2019). Inferring causation from time series in earth system sciences. Nature Communications 10 (1), 2553.
How to cite: Tibau, X.-A., Reimers, C., Eyring, V., Denzler, J., Reichstein, M., and Runge, J.: Spatiotemporal model for benchmarking causal discovery algorithms, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9604, https://doi.org/10.5194/egusphere-egu2020-9604, 2020.
EGU2020-10385 | Displays | ITS4.1/NP4.2
Using AutoRegressive Integrated Moving Average and Gaussian Processes with LSTM neural networks to predict discrete geomagnetic signals
Laurentiu Asimopolos, Alexandru Stanciu, Natalia-Silvia Asimopolos, Bogdan Balea, Andreea Dinu, and Adrian-Aristide Asimopolos
In this paper, we present results obtained for the geomagnetic data acquired at the Surlari Observatory, located about 30 km north of Bucharest, Romania. The observatory database contains records from the last seven solar cycles, with different sampling rates.
We used AR, MA, ARMA and ARIMA (AutoRegressive Integrated Moving Average) models for time series forecasting and phenomenological extrapolation. The ARIMA model is a generalization of the autoregressive moving average (ARMA) model, fitted to time series data to predict future points in the series.
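As a minimal illustration of the AR end of this model family, an AR(2) model can be fitted by least squares (solving the 2x2 normal equations directly) and used for a one-step forecast. The series below is synthetic with known coefficients; the actual study fits full ARIMA models to observatory records.

```python
import random

random.seed(7)

def fit_ar2(y):
    """Least-squares fit of y_t = a1*y_{t-1} + a2*y_{t-2}."""
    rows = [(y[t - 1], y[t - 2], y[t]) for t in range(2, len(y))]
    s11 = sum(r[0] * r[0] for r in rows)
    s12 = sum(r[0] * r[1] for r in rows)
    s22 = sum(r[1] * r[1] for r in rows)
    b1 = sum(r[0] * r[2] for r in rows)
    b2 = sum(r[1] * r[2] for r in rows)
    det = s11 * s22 - s12 * s12          # 2x2 normal equations
    return ((s22 * b1 - s12 * b2) / det, (s11 * b2 - s12 * b1) / det)

# synthetic stand-in for a geomagnetic component series, true (a1, a2) = (1.2, -0.5)
y = [0.0, 0.0]
for _ in range(2000):
    y.append(1.2 * y[-1] - 0.5 * y[-2] + random.gauss(0, 1))

a1, a2 = fit_ar2(y)
forecast = a1 * y[-1] + a2 * y[-2]   # one-step-ahead prediction
print(round(a1, 2), round(a2, 2))
```

The MA and I parts of ARIMA extend this scheme with lagged noise terms and differencing of the series, respectively, but the fit-then-extrapolate pattern is the same.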
We performed spectral analysis using the Fourier transform, which gives a relevant picture of the frequency spectrum of the signal components but without locating them in time, while wavelet analysis provides information on the time of occurrence of these frequencies.
Wavelet analysis allows local analysis of the magnetic field components through variable-frequency windows. Windows with longer time intervals allow us to extract low-frequency information, intermediate intervals of different sizes lead to medium-frequency information, and very narrow windows highlight the high frequencies or details of the analysed signals.
We extend the study of geomagnetic data analysis and predictive modelling by implementing a Long Short-Term Memory (LSTM) recurrent neural network that is capable of modelling long-term dependencies and is suitable for time series forecasting. This method includes a Gaussian process (GP) model in order to obtain probabilistic forecasts based on the LSTM outputs.
The evaluation of the proposed hybrid model is conducted using the Receiver Operating Characteristic (ROC) curve, applied to its probabilistic forecasts of geomagnetic storm events.
In addition, reliability diagrams are provided in order to support the analysis of the probabilistic forecasting models.
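The ROC evaluation can be illustrated with the rank-statistic form of the area under the curve; the storm labels and forecast probabilities below are hypothetical toy values, not outputs of the LSTM-GP model.

```python
def roc_auc(labels, scores):
    """AUC via the rank statistic: the probability that a randomly chosen
    storm event receives a higher score than a randomly chosen non-event."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# hypothetical storm labels (1 = storm) and forecast probabilities
labels = [1, 0, 1, 0, 0, 1, 0]
scores = [0.9, 0.2, 0.7, 0.4, 0.1, 0.6, 0.65]
print(roc_auc(labels, scores))
```

An AUC of 0.5 corresponds to a forecast no better than chance, while 1.0 means every storm was ranked above every non-event; reliability diagrams complement this by checking whether the probabilities themselves are well calibrated.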
The solution for predicting certain geomagnetic parameters is implemented in the MATLAB language, using the Deep Learning Toolbox, which provides a framework for the design and implementation of deep learning models.
In addition to the MATLAB environment, the solution can be accessed, modified, or improved in the Jupyter Notebook computing environment.
How to cite: Asimopolos, L., Stanciu, A., Asimopolos, N.-S., Balea, B., Dinu, A., and Asimopolos, A.-A.: Using AutoRegressive Integrated Moving Average and Gaussian Processes with LSTM neural networks to predict discrete geomagnetic signals, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10385, https://doi.org/10.5194/egusphere-egu2020-10385, 2020.
EGU2020-11799 | Displays | ITS4.1/NP4.2
Relationships in semantic data cubes
Alexandr Mansurov and Olga Majlingova
Linked data is a method for publishing structured data in a way that also expresses its semantics. This semantic description is implemented through vocabularies, which are usually specified by the W3C as web standards. However, anyone can create their own vocabulary and register it in an open catalogue like LOV.
There are many situations where it would be useful to be able to publish multi-dimensional data, such as statistics, on the web in such a way that it can be linked to related data sets and concepts. The Data Cube vocabulary provides a means to do this using the W3C RDF (Resource Description Framework) standard. The model underpinning the Data Cube vocabulary is compatible with the cube model that underlies SDMX (Statistical Data and Metadata eXchange), an ISO standard for exchanging and sharing statistical data and metadata among organizations [1].
Given the dispersed nature of linked data, we want to infer relationships between Linked Open Data datasets based on their semantic description. In particular we are interested in geospatial relationships.
We show a generic approach to inferring relationships in semantic data cubes via shared taxonomies and related dimensions, as well as through structured geographical datasets. Good results were achieved using structured geographical ontologies in combination with the generic approach for taxonomies.
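The kind of relationship inference described here can be sketched with RDF-style triples held as plain tuples (a toy example; real Data Cube metadata would be queried with SPARQL over an RDF store, and the `ds:`/`ex:` names are hypothetical, while `qb:` and `sdmx-dimension:` stand for the Data Cube and SDMX vocabularies).

```python
# Toy triple store: (subject, predicate, object)
triples = [
    ("ds:unemployment", "qb:dimension", "sdmx-dimension:refArea"),
    ("ds:unemployment", "qb:dimension", "sdmx-dimension:refPeriod"),
    ("ds:rainfall", "qb:dimension", "sdmx-dimension:refArea"),
    ("ds:rainfall", "qb:dimension", "ex:station"),
]

def dimensions(dataset):
    """All dimensions declared for a data cube in its semantic description."""
    return {o for s, p, o in triples if s == dataset and p == "qb:dimension"}

def related(ds_a, ds_b):
    """Two cubes are related if they share at least one dimension,
    e.g. a common geographical reference area."""
    return dimensions(ds_a) & dimensions(ds_b)

print(related("ds:unemployment", "ds:rainfall"))
```

Here the shared `sdmx-dimension:refArea` dimension is what links the two otherwise unrelated cubes, which is precisely the geospatial case the abstract highlights; richer ontologies would additionally relate dimensions that are not literally identical (e.g. a region containing a station).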
[1] Cyganiak, Reynolds, Tennison: The RDF Data Cube Vocabulary, W3C Recommendation, 16 January 2014,
How to cite: Mansurov, A. and Majlingova, O.: Relationships in semantic data cubes, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11799, https://doi.org/10.5194/egusphere-egu2020-11799, 2020.
EGU2020-13035 | Displays | ITS4.1/NP4.2
A test development of a data driven model to simulate chlorophyll data at Tongyeong bay in KoreaSung Dae Kim and Sang Hwa Choi
A pilot machine learning (ML) program was developed to test ML techniques for the simulation of biochemical parameters in a coastal area of Korea. Temperature, chlorophyll, solar radiation, daylight time, humidity and nutrient data were collected as a training dataset from the public domain and from in-house projects of KIOST (Korea Institute of Ocean Science & Technology). Daily satellite chlorophyll data from MODIS (Moderate Resolution Imaging Spectroradiometer) and GOCI (Geostationary Ocean Color Imager) were retrieved from public services. Daily SST (Sea Surface Temperature) data and ECMWF solar radiation data were retrieved from the GHRSST and Copernicus services. Meteorological and marine observation data were collected from KMA (Korea Meteorological Agency) and KIOST. The output of a marine biochemical numerical model of KIOST was also prepared to validate the ML model. The ML program was configured using an LSTM network and TensorFlow. During data processing, some chlorophyll data were interpolated because many values were missing in the satellite dataset. ML training was conducted repeatedly under varying combinations of sequence length, learning rate, number of hidden layers and number of iterations. 75% of the dataset was used for training and 25% for prediction. The maximum correlation between training data and predicted data was 0.995 when model output data were used as the training dataset. When satellite and observation data were used, correlations were around 0.55. Though the latter correlation is relatively low, the model simulated the periodic variation well, with some differences at peak values. We conclude that the ML model can be applied to the simulation of chlorophyll data if sufficient reliable observation data can be prepared.
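The preprocessing steps described above (gap interpolation and the 75/25 split into training windows) can be sketched as follows; the linear interpolation and sliding-window construction are our assumptions about the details, and the data here are toy values:

```python
import numpy as np

# Sketch of the data preparation for LSTM training: fill NaN gaps in the
# satellite chlorophyll series by linear interpolation, cut the series into
# fixed-length input windows, and split 75% / 25% for training / prediction.

def interpolate_gaps(series):
    """Linearly interpolate NaN gaps in a 1-D series."""
    s = np.asarray(series, dtype=float)
    idx = np.arange(s.size)
    ok = ~np.isnan(s)
    s[~ok] = np.interp(idx[~ok], idx[ok], s[ok])
    return s

def make_windows(series, seq_len):
    """Build (input window, next value) pairs for sequence training."""
    X = np.stack([series[i:i + seq_len] for i in range(series.size - seq_len)])
    y = series[seq_len:]
    return X, y

chl = interpolate_gaps([1.0, np.nan, 3.0, 4.0, np.nan, 6.0, 7.0, 8.0])
X, y = make_windows(chl, seq_len=3)
n_train = int(0.75 * len(X))              # 75% for training, 25% for prediction
X_train, X_pred = X[:n_train], X[n_train:]
```

The windowed arrays would then be fed to the TensorFlow LSTM; the correlation between predicted and observed series can be computed with `np.corrcoef`.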
How to cite: Kim, S. D. and Choi, S. H.: A test development of a data driven model to simulate chlorophyll data at Tongyeong bay in Korea, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13035, https://doi.org/10.5194/egusphere-egu2020-13035, 2020.
EGU2020-13133 | Displays | ITS4.1/NP4.2
Distributed Earth-Observation satellite data processing with Pytroll/SatpySalomon Eliasson, Martin Raspaud, and Adam Dybbroe
How to cite: Eliasson, S., Raspaud, M., and Dybbroe, A.: Distributed Earth-Observation satellite data processing with Pytroll/Satpy, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13133, https://doi.org/10.5194/egusphere-egu2020-13133, 2020.
EGU2020-17007 | Displays | ITS4.1/NP4.2
Machine learning technique for the quantitative evaluation of tight sandstone reservoirs using high-pressure mercury-injection dataJingJing Liu and JianChao Liu
In recent years, China's unconventional oil and gas exploration and development has advanced rapidly and entered a strategic breakthrough period. At the same time, tight sandstone reservoirs have become a highlight of unconventional oil and gas development in the Ordos Basin in China due to their industrial and strategic value. As a digital representation of storage capacity, reservoir evaluation is a vital component of tight-oil exploration and development. Previous work on reservoir evaluation indicated that achieving satisfactory results is difficult because of reservoir heterogeneity and a considerable risk of subjective or technical errors. In the data-driven era, this paper proposes a machine-learning method for the quantitative evaluation of tight sandstone reservoirs, based on K-means clustering and random forests applied to high-pressure mercury-injection data. This method not only provides new ideas for reservoir evaluation but can also be used for prediction and evaluation in other areas of oil and gas exploration and production, providing a more comprehensive parameter basis for “intelligent oil fields”. The results show that the reservoirs can be divided into three types, and quantitative reservoir-evaluation criteria were established. The method has strong applicability, captures evident reservoir characteristics, and discriminates clearly between reservoir types. These findings have practical implications for reservoirs with ultra-low permeability and complex pore structures.
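The K-means plus random forest pipeline can be sketched in a few lines of scikit-learn. The data below are synthetic (two loosely separated feature populations standing in for mercury-injection parameters), not the authors' dataset, and the three-cluster choice follows the abstract:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

# Illustrative sketch only: cluster pore-structure features derived from
# high-pressure mercury injection into three reservoir types with K-means,
# then train a random forest on the cluster labels so that new samples can
# be assigned to a type.

rng = np.random.default_rng(0)
# Two hypothetical features (e.g. displacement pressure, max. Hg saturation)
# for three loosely separated reservoir populations of 50 samples each.
X = np.vstack([rng.normal(loc, 0.3, size=(50, 2))
               for loc in ([0, 0], [3, 0], [0, 3])])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
labels = kmeans.labels_                       # reservoir types I / II / III

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
train_accuracy = clf.score(X, labels)
```

The trained classifier would then provide the quantitative typing criteria for unseen wells via `clf.predict`.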
How to cite: Liu, J. and Liu, J.: Machine learning technique for the quantitative evaluation of tight sandstone reservoirs using high-pressure mercury-injection data, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17007, https://doi.org/10.5194/egusphere-egu2020-17007, 2020.
EGU2020-17319 | Displays | ITS4.1/NP4.2
Producing solar flare predictions using support vector machine (SVM) applied with ionospheric total electron content (TEC) global mapsSaed Asaly, Lee-Ad Gottlieb, and Yuval Reuveni
Ground- and space-based remote sensing technology is one of the most useful tools for near-space environment studies and space weather research. During the last decade, considerable effort in space weather research has been devoted to developing the ability to predict the exact time and location of space weather events such as solar flares and X-ray bursts. Although most of the natural factors of such events can be modeled numerically, producing accurate predictions remains a challenging task due to insufficiently detailed real-time data. Hence, space weather scientists are trying to learn patterns in previous data distributions using data mining and machine learning (ML) tools in order to accurately predict future space weather events. Here, we present a new methodology based on a support vector machine (SVM) approach applied to ionospheric Total Electron Content (TEC) data, derived from the worldwide GPS geodetic receiver network, to predict B, C, M and X-class solar flare events. Experimental results indicate that the proposed method can predict X and M-class solar flare events with 80-94% and 78-93% accuracy, respectively. However, it does not produce similarly promising results for the smaller C and B-class flares.
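A minimal sketch of the SVM classification step, under assumed simplifications: each sample is a feature vector derived from a global TEC map, labelled 1 if an M/X-class flare followed and 0 otherwise. The data here are synthetic; the study uses GPS-derived TEC maps:

```python
import numpy as np
from sklearn.svm import SVC

# Toy stand-in for TEC-map feature vectors: quiet-time and pre-flare samples
# drawn from two separated distributions, classified with an RBF-kernel SVM.

rng = np.random.default_rng(1)
n = 200
X_quiet = rng.normal(0.0, 1.0, size=(n, 5))   # quiet-time TEC features
X_flare = rng.normal(2.5, 1.0, size=(n, 5))   # pre-flare TEC features
X = np.vstack([X_quiet, X_flare])
y = np.r_[np.zeros(n), np.ones(n)]

svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
accuracy = svm.score(X, y)
```

The low skill reported for B and C-class flares is consistent with this picture: the weaker the flare, the more the two feature distributions overlap and the less separable the classes become.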
How to cite: Asaly, S., Gottlieb, L.-A., and Reuveni, Y.: Producing solar flare predictions using support vector machine (SVM) applied with ionospheric total electron content (TEC) global maps, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17319, https://doi.org/10.5194/egusphere-egu2020-17319, 2020.
EGU2020-17431 | Displays | ITS4.1/NP4.2
Recognition of marine seismic data features using convolutional neural networksMassimiliano Iurcev, Paolo Diviacco, Simone Scardapane, and Federico Muciaccia
Exploration seismics is the branch of geophysics that explores the subsurface using the propagation, reflection and refraction of elastic waves generated by artificial sources. Seismic signals cannot be read straightforwardly as geological layers and features, but need to be interpreted by experienced analysts who contextualize the possible meaning of a signal within the geologic model under development. It goes without saying that interpretation is an activity biased by background and tacit knowledge, and by perceptive and even sociological factors. Applications of artificial intelligence in this field have gained ground especially within the oil Exploration and Production (E&P) industry, while less has been done in the academic sector. The main target in E&P is the detection of Direct Hydrocarbon Indicators (DHI), highlighted as anomalies in the attribute space mainly using Principal Component Analysis (PCA) and Self-Organizing Maps (SOM).
There are, however, seismic signals detectable in the image space that can be associated with specific geological features. Among these, we started by concentrating on the simplest forms, such as seismic diffractions, which can be associated with faults. A diffractor scatters energy in all directions and plots on a seismic section as a hyperbola. The diffraction hyperbola can be hard to detect, especially when the data are contaminated with noise or are not homogeneous, for example when they are integrated from different teams, practices or vintages.
To overcome these difficulties, a large compilation of data was gathered and submitted to experts in order to train a prediction system. Data were gathered from the SDLS (Antarctic Seismic Data Library System), a geoportal maintained by INOGS that provides open access to a large collection of multichannel seismic reflection data collected south of 60°S. An interactive application (written in Processing for GUI, open-source and multi-platform requirements) allowed a pool of geophysical researchers to individually mark hyperbolic features on the seismic traces by simple mouse dragging. Further processing of the collected information in Python, based on geometric algorithms, helped build a rich training dataset of about 10,000 classified images.
As a first proof of concept for this application, we leverage recent results in deep learning and neural networks to train a predictive model for the automatic detection of hyperbolas in the images. A convolutional neural network (CNN) is trained to map the small pictures extracted beforehand to a probability describing the possible presence of a hyperbola. We explore different designs for the CNN, using several state-of-the-art guidelines for its architecture, regularization, and optimization. Furthermore, we augment the original dataset in real time with noise and jittering to improve overall performance. Using the trained CNN, we built heatmaps over a set of test images, highlighting the regions with a high probability of containing a feature.
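The heatmap construction can be illustrated with a simplified, hypothetical stand-in for the trained CNN: a hyperbola-shaped template is scored against every window of a synthetic section and the score is squashed to a probability. The template geometry and logistic scoring below are our assumptions, not the study's network:

```python
import numpy as np

# Stand-in for the trained CNN: slide a hyperbola template across the image
# and map each window's match score to [0, 1], producing a probability
# heatmap like the one described above.

def hyperbola_template(size=16):
    """Binary mask of a downward-opening hyperbola in a size x size window."""
    tmpl = np.zeros((size, size))
    x = np.arange(size) - size // 2
    rows = np.minimum(np.sqrt(9.0 + 0.5 * x**2).astype(int), size - 1)
    tmpl[rows, np.arange(size)] = 1.0
    return tmpl

def heatmap(image, tmpl):
    """Logistic of the template match score at every window position."""
    size = tmpl.shape[0]
    H = np.zeros((image.shape[0] - size + 1, image.shape[1] - size + 1))
    for i in range(H.shape[0]):
        for j in range(H.shape[1]):
            window = image[i:i + size, j:j + size]
            score = (window * tmpl).sum() - tmpl.sum() / 2
            H[i, j] = 1.0 / (1.0 + np.exp(-score))
    return H

tmpl = hyperbola_template()
image = np.zeros((32, 32))
image[8:24, 8:24] = tmpl                  # plant one hyperbola in the section
H = heatmap(image, tmpl)
best = np.unravel_index(H.argmax(), H.shape)   # location of the detection
```

In the real pipeline the per-window probability comes from the CNN's output layer rather than a fixed template, but the sliding-window aggregation into a heatmap is the same.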
How to cite: Iurcev, M., Diviacco, P., Scardapane, S., and Muciaccia, F.: Recognition of marine seismic data features using convolutional neural networks, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17431, https://doi.org/10.5194/egusphere-egu2020-17431, 2020.
EGU2020-17346 | Displays | ITS4.1/NP4.2
Visual Understanding in Semantic Segmentation of Soil Erosion Sites in Swiss Alpine GrasslandsMaxim Samarin, Monika Nagy-Huber, Lauren Zweifel, Katrin Meusburger, Christine Alewell, and Volker Roth
Understanding the occurrence of soil erosion phenomena is of vital importance for ecology and agriculture, especially under changing climate conditions. In Alpine grasslands, susceptibility to soil erosion is predominantly due to the prevailing geological, morphological and climate conditions, but it is also affected by anthropogenic aspects such as agricultural land use. Climate change is expected to have a relevant impact on the driving factors of soil erosion, such as strong precipitation events and altered snow dynamics. In order to assess spatial and temporal changes of soil erosion phenomena and investigate possible reasons for their occurrence, large-scale methods to identify different soil erosion sites and quantify their extent are desirable.
In the field of remote sensing, one such semi-automatic method for (semantic) image segmentation is Object-based Image Analysis (OBIA), which makes use of spectral and spatial properties of image objects. In a recent study (Zweifel et al.), we successfully employed OBIA on high-resolution orthoimages (RGB spectral bands, 0.25 to 0.5 m pixel resolution) and derivatives of digital elevation models (DEM) of a study site in the Swiss Alps (Urseren Valley). The method provides high-quality segmentation results and an increasing trend of total area affected by soil erosion (+156 +/- 18%) is shown over a period from 2000 to 2016. However, using OBIA requires expert knowledge, manual adjustments, and is time-intensive in order to achieve satisfying segmentation results. In addition, the parameter settings of the method cannot be easily transferred from one image to another.
To allow for large-scale semantic segmentation of erosion sites, we make use of fully convolutional neural networks (CNNs). In recent years, CNNs proved to be very performant tools for a variety of image recognition tasks. While training CNNs might be more time demanding, predicting segmentations for new images and previously unseen regions is usually fast. For this study, we train a U-Net with high-quality segmentation masks provided by OBIA and DEM derivatives. The U-Net segmentation results are not only in good agreement with the OBIA results, but also a similar trend for the increase of total area affected by soil erosion is observed.
To understand which parts of the input are “relevant” for the segmentation result, we make use of methods that highlight different regions of the input image, thereby providing a visually interpretable result. We use two approaches to identify these relevant regions, based on perturbation of the input image and on relevance propagation of the output signal back to the input image. While the former identifies relevant regions by modifying the input image and observing the changes in the output, the latter tracks the dominant signal from the segmentation output back to the input image, highlighting the relevant regions. Although both approaches pursue the same goal, differences in the relevant regions they identify can be observed.
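The perturbation approach can be sketched with an occlusion loop over a dummy model (the model below is a hypothetical stand-in, not the U-Net): each patch of the input is zeroed in turn, and the drop in the model output is recorded as that patch's relevance.

```python
import numpy as np

def model(image):
    """Dummy stand-in for the segmentation network: responds to the bright
    'erosion site' in the upper-left quadrant of the image."""
    return image[:8, :8].mean()

def occlusion_relevance(image, patch=4):
    """Relevance of each patch = output drop when that patch is occluded."""
    base = model(image)
    R = np.zeros((image.shape[0] // patch, image.shape[1] // patch))
    for i in range(R.shape[0]):
        for j in range(R.shape[1]):
            occluded = image.copy()
            occluded[i*patch:(i+1)*patch, j*patch:(j+1)*patch] = 0.0
            R[i, j] = base - model(occluded)
    return R

image = np.zeros((16, 16))
image[:8, :8] = 1.0                        # the region the model responds to
R = occlusion_relevance(image)
top = np.unravel_index(R.argmax(), R.shape)
```

Relevance propagation works in the opposite direction (from output back through the network's layers) and needs access to the network internals, which is why the two methods can disagree on which regions they highlight.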
Zweifel, L., Meusburger, K., and Alewell, C. Spatio-temporal pattern of soil degradation in a Swiss Alpine grassland catchment. Remote Sensing of Environment, 235, 2019.
How to cite: Samarin, M., Nagy-Huber, M., Zweifel, L., Meusburger, K., Alewell, C., and Roth, V.: Visual Understanding in Semantic Segmentation of Soil Erosion Sites in Swiss Alpine Grasslands, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17346, https://doi.org/10.5194/egusphere-egu2020-17346, 2020.
EGU2020-17917 | Displays | ITS4.1/NP4.2
Insect Damaged Tree Detection with Drone Data and Deep Learning Technique, Case Study: Abies Mariesii Forest, Zao Mountain, JapanNguyen Ha Trang, Yago Diez, and Larry Lopez
An outbreak of fir bark beetles (Polygraphus proximus Blandford) in the natural Abies mariesii forest on Zao Mountain was reported in 2016. With the recent development of deep learning and drones, it is possible to automatically detect trees in both man-made and natural forests, including damaged trees. However, there are still some challenges in using deep learning and drones for sick-tree detection in mountainous areas that we want to address: (i) mixed forest structure with overlapping canopies, (ii) heterogeneous distribution of species across sites, (iii) the steep slopes of mountainous areas and (iv) the variation of mountain climate conditions. The current work can be summarized in three stages: data collection, data preparation and data processing. All data were collected by a DJI Mavic 2 Pro at 60-70 m flying height from the take-off point, with ground sampling distances (GSD) ranging from 1.23 cm to 2.54 cm depending on the slope of the site. To prepare the data for processing with a convolutional neural network (CNN), all images were stitched together using Agisoft Metashape to create five orthomosaics of the five study sites. Each site has a different percentage of fir according to the change in elevation. We then manually annotated all the mosaics with GIMP to categorize the forest cover into 6 classes: dead fir, sick fir, healthy fir, deciduous trees, grass and uncovered (pathway, building and soil). The mosaics are automatically divided by our algorithm into small patches with the assigned categories, with a first trial window size of 200 x 200 pixels, which we tentatively estimate can cover medium-sized fir trees. We will also try different window sizes and evaluate how this parameter affects the results. The resulting patches are finally used as input to the CNN architecture to detect the damaged trees.
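The patch-extraction step can be sketched as follows. The window size and the six classes come from the description above; the majority-vote labelling rule and the toy mask are our assumptions:

```python
import numpy as np

# Sketch: tile the annotated orthomosaic (a per-pixel class mask) into
# fixed-size windows and label each window with its most frequent class.

CLASSES = ["dead fir", "sick fir", "healthy fir",
           "deciduous", "grass", "uncovered"]

def tile_with_labels(mask, window=200):
    """Return (row, col, class_id) for each full window of the class mask."""
    patches = []
    for i in range(0, mask.shape[0] - window + 1, window):
        for j in range(0, mask.shape[1] - window + 1, window):
            win = mask[i:i + window, j:j + window]
            label = np.bincount(win.ravel(), minlength=len(CLASSES)).argmax()
            patches.append((i, j, int(label)))
    return patches

mask = np.full((400, 400), 4, dtype=int)   # toy mosaic: mostly grass (class 4)
mask[:200, :200] = 1                       # one window of sick fir (class 1)
patches = tile_with_labels(mask)
```

The image patches cropped at the same (row, col) offsets would then be paired with these labels as CNN training samples, and rerunning with a different `window` explores the size parameter mentioned above.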
The work is still ongoing, and we expect to achieve results with high classification accuracy, allowing us to build maps of the health status of all fir trees.
Keywords: Deep learning, CNN, drones, UAVs, tree detection, sick trees, insect damaged trees, forest
How to cite: Ha Trang, N., Diez, Y., and Lopez, L.: Insect Damaged Tree Detection with Drone Data and Deep Learning Technique, Case Study: Abies Mariesii Forest, Zao Mountain, Japan, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17917, https://doi.org/10.5194/egusphere-egu2020-17917, 2020.
The outbreak of fir bark beetles (Polygraphus proximus Blandford) in natural Abies Mariesii forest on Zao Mountain were reported in 2016. With the recent development of deep learning and drones, it is possible to automatically detect trees in both man-made and natural forests including damaged tree detection. However there are still some challenges in using deep learning and drones for sick tree detection in mountainous area that we want to address: (i) mixed forest structure with overlapping canopies, (ii) heterogeneous distribution of species in different sites, (iii) high slope of mountainous area and (iv) variation of mountainous climate condition. The current work can be summarized into three stages: data collection, data preparation and data processing. All the data were collected by DJI Mavic 2 pro at 60-70m flying height from the take off point with ground sampling distance (GSD) are ranging from1.23 cm to 2.54 cm depending on the slope of the sites. To prepare the data to be processed using a Convolutional Neural Network (CNN), all images were stitched together using Agisoft’s metashape software to create five orthomosaics of five study sites. Every site has different percentage of fir according to the change of elevation. We then manually annotated all the mosaics with GIMP to categorize all the forest cover into 6 classes: dead fir, sick fir, healthy fir, deciduous trees, grass and uncovered (pathway, building and soil). The mosaics are automatically divided into small patches with the assigned categories by our algorithm with first trial window size of 200 pixel x 200 pixel, which we temporally see can cover the medium fir trees. We will also try different window sizes and evaluate how this parameter affects results. The resulting patches were finally used as the input for CNN architecture to detect the damaged trees. 
The work is still ongoing, and we expect the deep learning algorithm to achieve high classification accuracy, allowing us to build maps of the health status of all fir trees.
Keywords: Deep learning, CNN, drones, UAVs, tree detection, sick trees, insect damaged trees, forest
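The patch-generation step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the 200 x 200 window and the six cover classes come from the abstract, but the function name, the majority-vote label assignment and the array layout are our own assumptions.

```python
import numpy as np

def extract_patches(mosaic, labels, window=200):
    """Divide an orthomosaic into non-overlapping window x window patches.

    mosaic: (H, W, C) image array; labels: (H, W) integer class-id array
    (e.g. 0..5 for the six forest-cover classes). Each patch is assigned
    the majority class of its label pixels. Returns (patches, patch_labels).
    """
    h, w = labels.shape
    patches, patch_labels = [], []
    for i in range(0, h - window + 1, window):
        for j in range(0, w - window + 1, window):
            tile = mosaic[i:i + window, j:j + window]
            lab = labels[i:i + window, j:j + window]
            # majority vote over the cover classes inside the patch
            patch_labels.append(np.bincount(lab.ravel()).argmax())
            patches.append(tile)
    return np.stack(patches), np.array(patch_labels)
```

The resulting `(patches, patch_labels)` pairs would then feed the CNN training loop; varying `window` is exactly the sensitivity experiment the abstract announces.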
EGU2020-18696 | Displays | ITS4.1/NP4.2
Tracking of mesoscale atmospheric phenomena in satellite mosaics using deep neural networksKirill Grashchenkov, Mikhail Krinitskiy, Polina Verezemskaya, Natalia Tilinina, and Sergey Gulev
Polar Lows (PLs) are intense atmospheric vortices that form mostly over the ocean. Due to their strong impact on deep ocean convection and on engineering infrastructure, their accurate detection and tracking is an important task, demanded by industrial end-users as well as academic researchers in various fields. While a few PL detection algorithms exist, there is no successful automatic PL tracking method applicable to satellite mosaics or other data that represent PLs as reliably as remote sensing products. At the moment, the only reliable way to track PLs is manual tracking, which is highly time-consuming and requires exhaustive examination of the source data by an expert.
At the same time, visual object tracking (VOT) is a well-known problem in computer vision. In our study, we present a novel method for tracking PLs in satellite mosaics based upon Deep Convolutional Neural Networks (DCNNs) of a specific architecture. Using the Southern Ocean Mesocyclones database gathered at the Shirshov Institute of Oceanology, we trained our model to perform the assignment task, an essential part of our tracking algorithm. As a proof of concept, we will present preliminary results of our approach for PL tracking for the summer period of 2004 in the Southern Ocean.
How to cite: Grashchenkov, K., Krinitskiy, M., Verezemskaya, P., Tilinina, N., and Gulev, S.: Tracking of mesoscale atmospheric phenomena in satellite mosaics using deep neural networks, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18696, https://doi.org/10.5194/egusphere-egu2020-18696, 2020.
EGU2020-10849 | Displays | ITS4.1/NP4.2 | Highlight
United in Variety: The EarthServer Datacube FederationPeter Baumann
Datacubes form an accepted cornerstone for analysis- (and visualization-)ready spatio-temporal data offerings. Beyond the multi-dimensional data structure, the paradigm also suggests rich services, abstracting away from the intractable zillions of files and products - actionable datacubes as established by Array Databases enable users to ask "any query, any time" without programming. The principle of location-transparent federations establishes a single, coherent information space.
The EarthServer federation is a large, growing network of data centers offering petabytes of data of critical variety, such as radar and optical satellite data, atmospheric data, elevation data, and thematic cubes like global sea ice. Around CODE-DE and the DIASes, an ecosystem of data has been established that is available to users as a single pool, in particular for efficient distributed data fusion irrespective of data location.
In our talk we present technology, services, and governance of this unique intercontinental line-up of data centers. A live demo will show distributed datacube fusion.
How to cite: Baumann, P.: United in Variety: The EarthServer Datacube Federation, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10849, https://doi.org/10.5194/egusphere-egu2020-10849, 2020.
EGU2020-2275 | Displays | ITS4.1/NP4.2
Based on Artificial Intelligence Simulation Study on the Impact of Land Use on Coastal Ecological Security in China's Coastal ZoneYuelei Xu
As a transitional area between land and ocean systems, the coastal zone is a region sensitive to global change, concentrating two thirds of the global population and wealth. Against the background of coastal urbanization and ecological civilization construction in China, more attention has been paid to developing the coastal-zone economy efficiently under strong interference from human activities. However, the lack of a suitable method to evaluate the coastal ecological environment affects the balance between utilization and protection in the coastal zone. This research compared present habitat quality with that projected for the future, and used this as an evaluation index of the impact of land use on coastal ecological security. The impact of land-use transformation on natural wetlands and natural habitat quality was calculated from coastal land-use data since 1980 and land use forecast for 2050 under the RCP 4.5 carbon dioxide emission scenario, simulated by the FLUS artificial intelligence model. The results show that over the last 20 years there has been substantial reclamation activity in China's coastal areas, especially in the Bohai Bay area, the Yangtze River Delta and the Pearl River Delta. From 1990 to 2010, the reclamation expansion areas were 272.49 km2, 270.09 km2 and 50.57 km2, respectively. With the economic transformation and ecological priority pursued in the southeast coastal areas in recent years, the effect of habitat restoration will be remarkable by 2050, while habitats in the Bohai Bay area and the Pearl River Delta show an obvious degradation trend. These results, including the 30-metre-resolution habitat quality, can serve as a reference for coastal ecological security maintenance and economic restructuring in different regions. This research will help build a national ecological security evaluation system, inform future policies for coastal ecological environment protection, and accelerate China's economic transformation.
How to cite: Xu, Y.: Based on Artificial Intelligence Simulation Study on the Impact of Land Use on Coastal Ecological Security in China's Coastal Zone, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2275, https://doi.org/10.5194/egusphere-egu2020-2275, 2020.
EGU2020-3856 | Displays | ITS4.1/NP4.2
A data mining method to identify field type in global oil and gas field case studyQian Zhang, Dawei Li, Min Niu, and Zhenzhen Wu
Based on the locations and types of past oil and gas fields, new discoveries can be predicted from the tectonic settings of the world's oil and gas fields. Geoscientists can characterize a field by the dominant geological event that influenced the structure's ability to trap and contain oil and gas in recoverable quantities, but in fact multiple factors affect the type of an oil and gas field. In this paper, a data mining approach was used to integrate the factors determining field type. The factors are evaluated from quantified field data, including general field data, location, well statistics, cumulative production data, reserves data and reservoir properties data. The method includes four steps. Firstly, a set of attributes is identified to describe the field characteristics. Secondly, principal component analysis and categorical principal components analysis reduce redundant data and noise by representing the main data variance with a few vector components in a transformed coordinate space. Finally, clustering was done based on a proximity matrix between samples; Euclidean distance definitions were tested in order to build a meaningful cluster tree. By applying this method to the world's oil and gas field data, we conclude that: (1) the world's fields can be classified into six types according to the quantified field data; (2) over 20% of the world's fields cluster at top depths between 2000 and 2500 meters; and (3) more attributes can be added to this clustering method, and their influence can be evaluated.
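The PCA-plus-hierarchical-clustering pipeline described above can be sketched as follows. This is a simplified stand-in under stated assumptions: the abstract's categorical principal components analysis step is omitted, plain SVD-based PCA with Ward linkage on Euclidean distances is used, and all names are hypothetical.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def cluster_fields(X, n_clusters=6, n_components=3):
    """Cluster oil/gas fields from a (n_fields, n_attributes) array
    of quantified field attributes."""
    # standardize so no single attribute dominates the distances
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    # PCA via SVD: a few components represent the main data variance
    # and reduce redundancy and noise
    U, s, _ = np.linalg.svd(Xs, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]
    # cluster tree from the Euclidean proximity between samples
    tree = linkage(scores, method="ward", metric="euclidean")
    return fcluster(tree, t=n_clusters, criterion="maxclust")
```

Cutting the same `tree` at different heights is one way to test whether six field types is a stable choice.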
How to cite: Zhang, Q., Li, D., Niu, M., and Wu, Z.: A data mining method to identify field type in global oil and gas field case study, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3856, https://doi.org/10.5194/egusphere-egu2020-3856, 2020.
EGU2020-6922 | Displays | ITS4.1/NP4.2
Automating the pre-processing of time-domain induced polarization data using machine learningAdrian S. Barfod* and Jakob Juul Larsen
Exploring and studying the earth system is becoming increasingly important as the slow depletion of natural resources ensues. An important data source is geophysical data, collected worldwide. After gathering, data go through rigorous quality control, pre-processing, and inverse modelling procedures. Such procedures often have manual components and require a trained geophysicist who understands the data in order to translate it into useful information about the earth system. The sheer amount of geophysical data collected today makes manual approaches impractical. Therefore, automating as much of the geophysical data workflow as possible would open up novel opportunities such as fully automated geophysical monitoring systems, real-time modeling during data collection, larger geophysical data sets, etc.
Machine learning has been proposed as a tool for automating workflows related to geophysical data. The field of machine learning encompasses multiple tools, which can be applied in a wide range of geophysical workflows, such as pre-processing, inverse modeling, data exploration etc.
We present a study where machine learning is applied to automate the time-domain induced polarization geophysical workflow. Such induced polarization data require pre-processing, which is manual in nature. One of the pre-processing steps is that a trained geophysicist inspects the data and removes so-called non-geologic signals, i.e. noise that does not represent geological variance. Specifically, a real-world case from Grindsted, Denmark is presented. Here, a time-domain induced polarization survey containing seven profiles was conducted. Two lines were manually processed and used for supervised training of an artificial neural network. The neural net then automatically processed the remaining profiles of the survey, with satisfactory results. Afterwards, the processed data were inverted, yielding the induced polarization parameters of the Cole-Cole model. We discuss the limitations and optimization steps related to training such a classification network.
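The supervised noise-flagging step could look roughly like the sketch below. The abstract names an artificial neural network trained on manually processed profiles but not its architecture, so scikit-learn's `MLPClassifier` is used here purely as a stand-in, and the log-magnitude decay-gate features are a hypothetical choice of ours.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_gate_classifier(decays, is_noise, seed=0):
    """Train a small ANN to flag non-geologic (noise) IP decay curves.

    decays: (n_curves, n_gates) array of decay values per time gate;
    is_noise: (n_curves,) 0/1 labels from the manually processed lines.
    """
    # log-magnitude features compress the wide dynamic range of IP decays
    X = np.log10(np.abs(decays) + 1e-12)
    clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000,
                        random_state=seed)
    clf.fit(X, is_noise)
    return clf
```

Applied gate-by-gate to the five unprocessed profiles, such a classifier would replicate the expert's cull before inversion; the training/validation split across profiles is where the limitations discussed in the abstract would surface.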
How to cite: Barfod*, A. S. and Larsen, J. J.: Automating the pre-processing of time-domain induced polarization data using machine learning, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6922, https://doi.org/10.5194/egusphere-egu2020-6922, 2020.
EGU2020-7586 | Displays | ITS4.1/NP4.2
Selection of Reliable Machine Learning Algorithms for Geophysical ApplicationsOctavian Dumitru, Gottfried Schwarz, Dongyang Ao, Gabriel Dax, Vlad Andrei, Chandra Karmakar, and Mihai Datcu
During the last years, one could see a broad use of machine learning tools and applications. However, when we use these techniques for geophysical analyses, we must be sure that the obtained results are scientifically valid and allow us to derive quantitative outcomes that can be directly compared with other measurements.
Therefore, we set out to identify typical datasets that lend themselves well to geophysical data interpretation. To simplify this very general task, we concentrate in this contribution on multi-dimensional image data acquired by satellites with typical remote sensing instruments for Earth observation, used for the analysis of:
- Atmospheric phenomena (cloud cover, cloud characteristics, smoke and plumes, strong winds, etc.)
- Land cover and land use (open terrain, agriculture, forestry, settlements, buildings and streets, industrial and transportation facilities, mountains, etc.)
- Sea and ocean surfaces (waves, currents, ships, icebergs, coastlines, etc.)
- Ice and snow on land and water (ice fields, glaciers, etc.)
- Image time series (dynamical phenomena, their occurrence and magnitude, mapping techniques)
Then we analyze important data characteristics for each type of instrument. One can see that most selected images are characterized by their type of imaging instrument (e.g., radar or optical images), their typical signal-to-noise figures, their preferred pixel sizes, their various spectral bands, etc.
As a third step, we select a number of established machine learning algorithms, available tools, software packages, required environments, published experiences, and specific caveats. The comparisons cover traditional “flat” as well as advanced “deep” techniques that have to be compared in detail before making any decision about their usefulness for geophysical applications. They range from simple thresholding to k-means, from multi-scale approaches to convolutional networks (with visible or hidden layers) and auto-encoders with sub-components from rectified linear units to adversarial networks.
Finally, we summarize our findings in several instrument / machine learning algorithm matrices (e.g., for active or passive instruments). These matrices also contain important features of the input data and their consequences, computational effort, attainable figures-of-merit, and necessary testing and verification steps (positive and negative examples). Typical examples are statistical similarities, characteristic scales, rotation invariance, target groupings, topic bagging and targeting (hashing) capabilities as well as local compression behavior.
How to cite: Dumitru, O., Schwarz, G., Ao, D., Dax, G., Andrei, V., Karmakar, C., and Datcu, M.: Selection of Reliable Machine Learning Algorithms for Geophysical Applications, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7586, https://doi.org/10.5194/egusphere-egu2020-7586, 2020.
EGU2020-8501 | Displays | ITS4.1/NP4.2
Mapping the geogenic radon potential for Germany by machine learningEric Petermann, Hanna Meyer, Madlene Nussbaum, and Peter Bossew
The radioactive gas radon (Rn) is considered an indoor air pollutant due to its detrimental effects on human health: radon is the second most important cause of lung cancer after tobacco smoking. In most cases, the dominant source of indoor Rn is the ground beneath the building. Following the European Basic Safety Standards, all EU Member States are required to delineate Rn priority areas, i.e. areas with an increased risk of high indoor radon concentrations. One possibility to this end is the “geogenic Rn potential” (GRP), which quantifies the availability of geogenic Rn for infiltration into buildings. The GRP is defined as a function of the Rn concentration in soil gas and the soil gas permeability.
In this study we used > 4,000 point measurements across Germany in combination with ~50 environmental co-variables (predictors). We fitted machine learning regression models to the target variables Rn concentration in soil and soil gas permeability. Subsequently, the GRP is calculated from both quantities. We compared the performance of three algorithms: Multivariate Adaptive Regression Splines (MARS), Random Forest (RF) and Support Vector Machines (SVM). Potential candidate predictors are geological, hydrogeological and soil landscape units, soil physical properties, soil chemical properties, soil hydraulic properties, climatic data, tectonic fault data, and geomorphological parameters.
The identification of informative predictors, tuning of the model hyperparameters and estimation of model performance were conducted using spatial 10-fold cross-validation, where the folds were split by spatial blocks of 40 × 40 km. This procedure counteracts the spatial autocorrelation of predictor and response data and is expected to ensure independence of training and test data. MARS, RF and SVM were evaluated in terms of their prediction accuracy and prediction variance. The results revealed that RF provided the most accurate predictions so far. The effect of the selected predictors on the final map was assessed quantitatively using partial dependence plots and spatial dependence maps. The RF model included 8 and 14 informative predictors for radon and permeability, respectively. The most important predictors were geological and hydrogeological units as well as field capacity for radon, and soil landscape, geological and hydrogeological units for soil gas permeability.
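The spatial blocking described above can be sketched as follows. Only the 40 km block size and 10 folds come from the abstract; the random block-to-fold assignment and the assumption of projected metre coordinates are our own simplifications.

```python
import numpy as np

def spatial_block_folds(x, y, block_size=40_000, n_folds=10, seed=0):
    """Assign samples to CV folds by 40 km x 40 km spatial blocks.

    x, y: projected coordinates in metres. All points in one block share
    a fold, so spatially autocorrelated neighbours cannot end up split
    between training and test data.
    """
    bx = np.floor(np.asarray(x) / block_size).astype(int)
    by = np.floor(np.asarray(y) / block_size).astype(int)
    blocks = bx * 100_000 + by                 # unique id per block
    uniq = np.unique(blocks)
    rng = np.random.default_rng(seed)
    fold_of_block = dict(zip(uniq, rng.integers(0, n_folds, uniq.size)))
    return np.array([fold_of_block[b] for b in blocks])
```

Hyperparameter tuning and performance estimates are then computed by holding out one fold of blocks at a time, rather than random points.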
How to cite: Petermann, E., Meyer, H., Nussbaum, M., and Bossew, P.: Mapping the geogenic radon potential for Germany by machine learning, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8501, https://doi.org/10.5194/egusphere-egu2020-8501, 2020.
EGU2020-8878 | Displays | ITS4.1/NP4.2
Evaluating Modelled Aerosol Absorption by Simulating the UV Aerosol Index using Machine LearningJiyunting Sun, J.Pepijn Veefkind, Peter van Velthoven, and Pieternel.F Levelt
The environmental effects of absorbing aerosols are complex: they warm the surface and the atmosphere on a large scale, while locally they cool the surface. Absorbing aerosols also affect precipitation and cloud formation. A comprehensive understanding of aerosol absorption is important to reduce the uncertainties in aerosol radiative forcing assessments. The ultraviolet aerosol index (UVAI) is a qualitative measure of aerosol absorption provided by multiple satellite missions since 1978. UVAI is calculated directly from the difference between the measured and simulated spectral contrast in the near-UV channel, without assumptions on aerosol properties. This long-term global daily data set is advantageous for many applications. In previous work, we attempted to derive the single scattering albedo (SSA) from UVAI. In this work, we evaluate the UVAI derived from a chemistry transport model (CTM) against satellite observations. Conventionally, UVAI is simulated from model aerosol fields at a satellite footprint using a radiative transfer model. To do this, one has to make assumptions on the spectral dependence of the aerosol optical properties. The lack of measurements and our poor knowledge of these properties may lead to large uncertainties in the simulated UVAI, and these uncertainties are difficult to quantify. In this work, we propose an alternative method: simulating the UVAI with Machine Learning (ML) approaches. A training data set is constructed from independent measurements and/or model simulations with strict quality controls. We simulate the UVAI using modelled aerosol properties, the Sun-satellite geometry and the surface parameters. The discrepancy between the retrieved UVAI and the ML predictions can help us to identify unrealistic inputs of aerosol absorption in the model.
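The surrogate idea above, predict UVAI from modelled aerosol properties, geometry and surface parameters and then inspect the residuals, can be sketched as follows. All inputs and the target relation are synthetic stand-ins, and GradientBoostingRegressor is an illustrative learner choice, not necessarily the authors':

```python
# Sketch: learn UVAI from modelled aerosol/geometry/surface features, then
# look at residuals on held-out data as a probe of unrealistic inputs.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
n = 1000
aod = rng.uniform(0.0, 2.0, n)       # modelled aerosol optical depth (synthetic)
ssa = rng.uniform(0.80, 1.0, n)      # modelled single scattering albedo
sza = rng.uniform(0.0, 70.0, n)      # solar zenith angle (deg)
albedo = rng.uniform(0.0, 0.3, n)    # surface albedo
X = np.column_stack([aod, ssa, sza, albedo])

# Synthetic "truth": absorbing aerosol (low SSA, high AOD) raises the UVAI.
uvai = 30.0 * aod * (1.0 - ssa) + rng.normal(scale=0.1, size=n)

model = GradientBoostingRegressor(random_state=0).fit(X[:800], uvai[:800])
residual = uvai[800:] - model.predict(X[800:])
print(f"residual std: {residual.std():.3f}")
```

In the real application the "truth" column would be the retrieved UVAI, so large systematic residuals flag footprints where the modelled absorption inputs are likely unrealistic.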
How to cite: Sun, J., Veefkind, J. P., van Velthoven, P., and Levelt, P. F.: Evaluating Modelled Aerosol Absorption by Simulating the UV Aerosol Index using Machine Learning, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8878, https://doi.org/10.5194/egusphere-egu2020-8878, 2020.
EGU2020-9243 | Displays | ITS4.1/NP4.2
Using deep learning to transfer knowledge between satellite datasets for automated agricultural land discrimination in Afghanistan
Alex Hamer, Daniel Simms, and Toby Waine
Accurate mapping of agricultural area is essential for Afghanistan’s annual opium poppy monitoring programme. Access to labelled data remains the main barrier to utilising deep learning from satellite imagery to automate the process of land cover classification. In this study, we aim to transfer knowledge from historical labelled data of agricultural land, from work on poppy cultivation estimates undertaken between 2007 and 2010, to classify imagery from a range of sensors using deep learning. Fully Convolutional Networks (FCNs) have been used to learn the complex features of agriculture in southern Afghanistan from the inherent spatial and spectral characteristics of satellite imagery. FCNs are trained and validated using labelled Disaster Monitoring Constellation (DMC) data (32 m) to transfer knowledge of agricultural land to classify other imagery, such as Landsat (30 m). The dependency on spatial and spectral characteristics is explored using intensity, Normalised Difference Vegetation Index (NDVI), top of atmosphere reflectance and the tasselled cap transformation. The underlying spatial features associated with agriculture are found to play a significant role in agriculture discrimination. High classification performance has been achieved, with over 92% overall accuracy and 0.58 intersection over union. The ability to transfer knowledge from historical datasets to new satellite sensors is an exciting prospect for future automated agricultural land discrimination in the United Nations Office on Drugs and Crime annual opium survey.
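Two of the sensor-independent input representations named above (intensity and NDVI) are simple band arithmetic, which is what makes transfer between 32 m DMC and 30 m Landsat plausible. A minimal sketch with synthetic green/red/NIR reflectance tiles (not DMC or Landsat data; the FCN itself is omitted):

```python
# Sketch: build intensity and NDVI channels from reflectance bands and stack
# them into the channels-last layout an FCN would consume.
import numpy as np

rng = np.random.default_rng(2)
green = rng.uniform(0.02, 0.2, size=(64, 64))   # synthetic reflectance tiles
red = rng.uniform(0.02, 0.2, size=(64, 64))
nir = rng.uniform(0.1, 0.5, size=(64, 64))

intensity = (green + red + nir) / 3.0            # mean reflectance
ndvi = (nir - red) / (nir + red)                 # NDVI in [-1, 1]

features = np.stack([intensity, ndvi], axis=-1)  # shape (rows, cols, channels)
print(features.shape)
```

Because NDVI is a ratio of band differences, it is largely insensitive to sensor gain, which supports reusing a network trained on one sensor's imagery with another's.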
How to cite: Hamer, A., Simms, D., and Waine, T.: Using deep learning to transfer knowledge between satellite datasets for automated agricultural land discrimination in Afghanistan, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9243, https://doi.org/10.5194/egusphere-egu2020-9243, 2020.
EGU2020-12943 | Displays | ITS4.1/NP4.2
Correction for the Measurements of Particulate Matter Sensors through Machine Learning
Zhen Cheng and Qiaofeng Guo
Instruments based on light scattering used to measure total suspended particulate (TSP) concentrations have the advantages of fast response, small size and low cost compared to the gravimetric reference method. However, the relationship between scattering intensity and TSP mass concentration varies nonlinearly with both environmental conditions and particle properties, making corrections difficult. This study applied four machine learning models (support vector machine, random forest, gradient boosting regression trees and an artificial neural network) to correct scattering measurements for TSP mass concentrations. A total of 1141 hourly records of collocated gravimetric and light scattering measurements taken at 17 urban sites in Shanghai, China were used for model training and validation. All four machine learning models improved the linear regressions between scattering and gravimetric mass, increasing slopes from 0.4 to 0.9-1.1 and coefficients of determination from 0.1 to 0.8-0.9. Partial dependence plots indicate that TSP concentrations determined by light scattering instruments increased continuously in the PM2.5 concentration range of ~0-80 µg/m³; however, they leveled off above PM10 and TSP concentrations of ~60 and 200 µg/m³, respectively. The TSP mass concentrations determined by scattering showed exponential growth after relative humidity exceeded 70%, in agreement with previous studies on hygroscopic growth of fine particles. This study demonstrates that machine learning models can effectively improve the correlation of light scattering measurements of TSP mass concentrations with filter-based methods. Interpretation analysis further provides scientific insight into the major factors (e.g., hygroscopic growth) that cause scattering measurements to deviate from TSP mass concentrations, besides other factors such as fluctuations in mass density and refractive index.
Figure 1. Comparison of TSP concentrations determined by light scattering and machine learning model outputs with those by gravimetric analyses. (a) LR: Linear Regression; (b) SVM: Support Vector Machine; (c) RF: Random Forest; (d) GBRT: Gradient Boosting Regression Tree; (e) ANN: Artificial Neural Network. y/x is the slope, R² the coefficient of determination, and N the number of records in the dataset.
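The correction scheme above, regress gravimetric TSP on the scattering reading plus covariates, can be sketched with synthetic collocated data (not the Shanghai records). A random forest stands in for the four learners, with relative humidity as the covariate driving the hygroscopic distortion:

```python
# Sketch: correct a humidity-distorted scattering signal toward "gravimetric"
# TSP with a random forest, and compare with a plain linear regression.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 1141
rh = rng.uniform(30, 95, n)                      # relative humidity (%)
tsp = rng.uniform(20, 300, n)                    # synthetic gravimetric TSP (ug/m3)
# Scattering response inflates above ~70% RH (hygroscopic growth).
scatter = tsp * (1.0 + 0.02 * np.maximum(rh - 70, 0)) + rng.normal(scale=10, size=n)

X = np.column_stack([scatter, rh])
lin = LinearRegression().fit(scatter[:900].reshape(-1, 1), tsp[:900])
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:900], tsp[:900])

r2_lin = lin.score(scatter[900:].reshape(-1, 1), tsp[900:])
r2_rf = rf.score(X[900:], tsp[900:])
print(f"linear R^2 {r2_lin:.2f} vs RF R^2 {r2_rf:.2f}")
```

The linear model cannot undo the multiplicative humidity effect, whereas the learner that sees RH as a feature can, mirroring the slope and R² improvements reported in the abstract.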
How to cite: Cheng, Z. and Guo, Q.: Correction for the Measurements of Particulate Matter Sensors through Machine Learning, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12943, https://doi.org/10.5194/egusphere-egu2020-12943, 2020.
EGU2020-3860 | Displays | ITS4.1/NP4.2
Big data analysis and achievements of global petroleum exploration
Shiyun Mi, Zhenzhen Wu, and Qian Zhang
EGU2020-3809 | Displays | ITS4.1/NP4.2
Deep learning Q inversion from reflection seismic data with strong attenuation using an encoder-decoder convolutional neural network: an example from the South China Sea
Hao Zhang, Jianguang Han, Heng Zhang, and Yi Zhang
Seismic waves exhibit various types of attenuation while propagating through the subsurface, strongly related to the complexity of the earth. Anelasticity of the subsurface medium, quantified by the quality factor Q, causes dissipation of seismic energy. Attenuation distorts the phase of the seismic data and decays the higher frequencies in the data more than the lower frequencies. Strong attenuation resulting from geology such as gas pockets is a notoriously challenging problem for high-resolution imaging because it strongly reduces the amplitude and degrades the imaging quality of deeper events. To compensate for this attenuation effect, we first need to estimate the attenuation model (Q) accurately. However, it is challenging to derive a laterally and vertically varying attenuation model in the depth domain directly from surface reflection seismic data. This paper proposes a method to derive the anomalous Q model corresponding to strongly attenuative media from marine reflection seismic data using a deep-learning approach, the convolutional neural network (CNN). We treat Q-anomaly detection as a semantic segmentation task and train an encoder-decoder CNN (U-Net) to perform a pixel-by-pixel prediction on the seismic section, assigning each pixel a probability of belonging to a given level of attenuation, which helps build up the attenuation model. The proposed method uses a volume of marine 3D reflection seismic data for network training and validation; only a very small amount of data is needed as the training set thanks to U-Net, a specific encoder-decoder CNN architecture for semantic segmentation tasks. Finally, to evaluate the attenuation model predicted by the proposed method, we validate the predicted heterogeneous Q model using de-absorption pre-stack depth migration (Q-PSDM), obtaining a high-resolution depth image with reasonable compensation.
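The frequency-selective decay the abstract describes follows the standard constant-Q amplitude relation, under which higher frequencies are damped more strongly over a given traveltime:

```latex
A(f, t) = A_0(f)\, e^{-\pi f t / Q}
```

Here \(A_0\) is the unattenuated amplitude, \(f\) the frequency, \(t\) the traveltime and \(Q\) the quality factor; a low-Q anomaly such as a gas pocket therefore strips the high frequencies from everything imaged beneath it, which is what the Q-PSDM step compensates for.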
How to cite: Zhang, H., Han, J., Zhang, H., and Zhang, Y.: Deep learning Q inversion from reflection seismic data with strong attenuation using an encoder-decoder convolutional neural network: an example from South China Sea, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3809, https://doi.org/10.5194/egusphere-egu2020-3809, 2020.
EGU2020-19807 | Displays | ITS4.1/NP4.2
Mixing height derivation from aerosol lidar using machine learning: KABL and ADABL algorithms
Thomas Rieutord and Sylvain Aubert
Atmospheric boundary layer height (BLH) is a key parameter for air quality forecasting. To measure it, a common practice is to use aerosol lidars: a strong decrease in the backscatter signal indicates the top of the boundary layer. This work explains and compares two machine learning methods to derive BLH from backscatter profiles: the K-means algorithm and the AdaBoost algorithm. As K-means is unsupervised, it depends less on instrument settings and hence generalizes better. AdaBoost was used for binary classification (boundary layer/free atmosphere). It was trained on two days labelled by hand, so it generalizes less well but represents the diurnal cycle better. Both methods are compared to the lidar manufacturer's software and to the BLH derived from collocated radiosondes, which are taken as the reference for all other methods. The comparison is carried out over a two-year period (2017-2018) at two sites (Trappes and Brest). Data come from Météo-France's operational network. The code and data that produced these results will be released under a fully open access licence under the names KABL (K-means for Atmospheric Boundary Layer) and ADABL (AdaBoost for Atmospheric Boundary Layer).
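The unsupervised KABL idea, cluster a backscatter profile and read the BLH off the cluster boundary, can be sketched on a toy profile. The profile below is synthetic, and the real algorithm works on full lidar profiles with more clusters and quality controls:

```python
# Toy sketch: K-means on a single backscatter profile; the BLH estimate is the
# lowest altitude whose cluster label differs from the surface cluster.
import numpy as np
from sklearn.cluster import KMeans

altitude = np.arange(0, 3000, 15.0)                 # range gates (m)
# Synthetic profile: strong backscatter in the boundary layer, drop above 1200 m.
backscatter = np.where(altitude < 1200, 1.0, 0.1)
backscatter = backscatter + 0.02 * np.random.default_rng(4).normal(size=altitude.size)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    backscatter.reshape(-1, 1))
blh = altitude[np.argmax(labels != labels[0])]      # first gate in the other cluster
print(f"estimated BLH: {blh:.0f} m")
```

No labelled training days are needed, which is why this route depends less on instrument settings than the supervised ADABL classifier.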
How to cite: Rieutord, T. and Aubert, S.: Mixing height derivation from aerosol lidar using machine learning: KABL and ADABL algorithms, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19807, https://doi.org/10.5194/egusphere-egu2020-19807, 2020.
EGU2020-13063 | Displays | ITS4.1/NP4.2
Supervised regression learning for predictions of aerosol particle size distributions from PM2.5, total particle number and meteorological parameters at Helsinki SMEAR3 station
Juha Kangasluoma, Yusheng Wu, Runlong Cai, Joel Kuula, Hilkka Timonen, Pasi Aalto, Markku Kulmala, and Tuukka Petäjä
J. Kangasluoma1, Y. Wu1, R. Cai1, J. Kuula2, H. Timonen2, P. P. Aalto1, M. Kulmala1, T. Petäjä1
1 Institute for Atmospheric and Earth System Research / Physics, Faculty of Science, University of Helsinki, Finland
2 Finnish Meteorological Institute, Erik Palménin aukio 1, 00560 Helsinki, Finland
Atmospheric particulate matter is a significant pollutant and causes millions of premature deaths yearly, especially in urban environments. To conduct epidemiological studies and quantify the role of sub-micron particles, especially ultrafine particles (<100 nm), in mortality caused by particulate matter, long-term monitoring of particle number, surface area, mass and chemical composition is needed. Such monitoring on a large scale is currently done only for particulate mass, namely PM2.5 (mass of particulates smaller than 2.5 μm), while a large body of evidence suggests that ultrafine particles, which dominate the number of the aerosol distribution, cause significant health effects that do not originate from particle mass.
The chicken-and-egg problem here is that monitoring of particle number or surface area is not required by the authorities due to the lack of epidemiological evidence showing the harm and the lack of suitable instrumentation (although the car industry already voluntarily limits ultrafine particle number emissions), while these epidemiological studies are lacking because of the lack of suitable data. Here we present the first step in solving this “lack of data” issue by predicting aerosol particle size distributions from PM2.5, total particle number and meteorological measurements, from which the particle size distribution, and subsequently number, surface area and mass exposure, can be calculated.
We use bagged-tree supervised regression learning (from a MATLAB toolbox) to train an algorithm with one full year of data at 10 min time resolution from the SMEAR3 station in Helsinki during 2018. The response variable is the particle size distribution (each bin separately) and the training variables are PM2.5, particle number and meteorological parameters. The trained algorithm is then used with the same training variables, but from 2019, to predict size distributions, which are compared directly to the size distributions measured by a differential mobility particle sizer.
To check the model performance, we divide the predicted distributions into three size bins, 3-25, 25-100 and 100-1000 nm, and calculate the coefficient of determination (r2) between the measured and predicted number concentrations at 10 min time resolution, which are 0.79, 0.60 and 0.50, respectively. We also calculate r2 between the measured and predicted number, surface area and mass exposures, which are 0.87, 0.79 and 0.74, respectively. Uncertainties in the prediction are mostly random, so the r2 values increase at longer averaging times.
Our results show that an algorithm trained with particle size distribution data together with particle number, PM2.5 and meteorological data can predict particle size distributions and number, surface area and mass exposures. In practice, these predictions could be realized, e.g., in air pollution monitoring networks by implementing a condensation particle counter at each site and circulating a differential mobility size spectrometer around the sites.
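The train-on-one-year, predict-the-next setup described above can be sketched with scikit-learn's bagged trees (standing in for the MATLAB toolbox); all records below are synthetic stand-ins for the SMEAR3 data, and a single size bin stands in for the per-bin models:

```python
# Sketch: bagged-tree regression of one size-bin concentration on PM2.5,
# total number and a meteorological parameter; first half trains, second
# half (the "next year") tests.
import numpy as np
from sklearn.ensemble import BaggingRegressor   # default base learner: decision tree
from sklearn.metrics import r2_score

rng = np.random.default_rng(5)
n = 2000                                         # synthetic 10-min records
pm25 = rng.uniform(2, 40, n)                     # ug/m3
ntot = rng.uniform(1e3, 2e4, n)                  # total number (cm^-3)
temp = rng.uniform(-20, 25, n)                   # temperature (deg C)
X = np.column_stack([pm25, ntot, temp])
# Synthetic concentration in one size bin, loosely tied to the predictors.
bin_conc = 0.3 * ntot + 50 * pm25 + rng.normal(scale=500, size=n)

model = BaggingRegressor(n_estimators=50, random_state=0)
model.fit(X[:1000], bin_conc[:1000])
r2 = r2_score(bin_conc[1000:], model.predict(X[1000:]))
print(f"test R^2: {r2:.2f}")
```

Training one such model per size bin and concatenating the predictions reconstructs the full distribution, from which number, surface area and mass exposures follow by moment integration.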
How to cite: Kangasluoma, J., Wu, Y., Cai, R., Kuula, J., Timonen, H., Aalto, P., Kulmala, M., and Petäjä, T.: Supervised regression learning for predictions of aerosol particle size distributions from PM2.5, total particle number and meteorological parameters at Helsinki SMEAR3 station, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13063, https://doi.org/10.5194/egusphere-egu2020-13063, 2020.
EGU2020-16142 | Displays | ITS4.1/NP4.2
Characterizing sown biodiverse pastures using remote sensing data with machine learning
Tiago G. Morais, Pedro Vilar, Marjan Jongen, Nuno R. Rodrigues, Ivo Gama, Tiago Domingos, and Ricardo F.M. Teixeira
In Portugal, beef cattle are commonly fed with a mixture of grazing and forages/concentrate feed. Sown biodiverse permanent pastures rich in legumes (SBP) were introduced to provide quality animal feed and offset concentrate consumption. SBP also sequester large amounts of carbon in soils. They use biodiversity to promote pasture productivity, supporting a more than doubling in sustainable stocking rate, with several potential environmental co-benefits besides carbon sequestration in soils.
Here, we develop and test the combination of remote sensing and machine learning approaches to predict the most relevant production parameters of plant and soil. For the plants, we included pasture yield, nitrogen and phosphorus content, and species composition (legumes, grasses and forbs). In the soil, we included soil organic matter content, as well as nitrogen and phosphorus content. For soils, hyperspectral data were obtained in the laboratory using previously collected soil samples (in near-infrared wavelengths). Remotely sensed multispectral data was acquired from the Sentinel-2 satellite. We also calculated several vegetation indexes. The machine learning algorithms used were artificial neural networks and random forests regressions. We used data collected in late winter/spring from 14 farms (more than 150 data samples) located in the Alentejo region, Portugal.
The models demonstrated good prediction capacity, with r-squared (r2) higher than 0.70 for most of the variables and both spectral datasets. Estimation error decreases with proximity of the spectral data acquisition, i.e. error is lower using the hyperspectral dataset than Sentinel-2 data. Further, the results did not show systematic overestimation or underestimation. The fit is particularly accurate for yield and organic matter, with r2 higher than 0.80. Soil organic matter content has the lowest standard estimation error (3 g/kg soil; average SOM: 20 g/kg soil), while the legumes fraction has the highest estimation error (20% legumes fraction).
Results show that a move towards automated monitoring (combining proximal or remote sensing data and machine learning methods) can lead to expedited and low-cost methods for mapping and assessment of variables in sown biodiverse pastures.
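The workflow above can be sketched in a few lines. The sketch below trains a random forest, one of the two model families used in the study, on synthetic spectra; the band count, the simulated SOM relation, and all numeric values are illustrative assumptions, not the authors' data or tuned models.

```python
# Hedged sketch: predicting soil organic matter (SOM) from spectra with a
# random forest regression, analogous to the abstract's workflow.
# Synthetic data only; bands 5 and 12 are arbitrarily chosen as informative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_bands = 150, 20                       # ~150 samples, as in the study
X = rng.uniform(0.0, 1.0, (n_samples, n_bands))    # toy reflectance spectra
# assume SOM depends on two bands plus noise (illustrative, not measured)
y = 20 + 30 * X[:, 5] - 15 * X[:, 12] + rng.normal(0, 1.5, n_samples)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
r2 = r2_score(y_te, model.predict(X_te))
print(f"test r2 = {r2:.2f}")
```

In the actual study the same pattern is applied per target variable (yield, N, P, species fractions, SOM), with Sentinel-2 bands or laboratory NIR spectra as predictors.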
How to cite: Morais, T. G., Vilar, P., Jongen, M., Rodrigues, N. R., Gama, I., Domingos, T., and Teixeira, R. F. M.: Characterizing sown biodiverse pastures using remote sensing data with machine learning, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-16142, https://doi.org/10.5194/egusphere-egu2020-16142, 2020.
EGU2020-13473 | Displays | ITS4.1/NP4.2
Advanced harmonization and sensor fusion to transform data readiness and resolution for big data analytics
Rasmus Houborg and Giovanni Marchisio
Access to data is no longer a problem. The recent emergence of new observational paradigms combined with advances in conventional spaceborne sensing has resulted in a proliferation of satellite sensor data. This geospatial information revolution constitutes a game changer in the ability to derive time-critical and location-specific insights into dynamic land surface processes.
However, it’s not easy to integrate all of the data that is available. Sensor interoperability issues and cross-calibration challenges present obstacles in realizing the full potential of these rich geospatial datasets.
The production of analysis ready, sensor-agnostic, and very high spatiotemporal resolution information feeds has an obvious role in advancing geospatial data analytics and machine learning applications at broad scales with potentially far reaching societal and economic benefits.
At Planet, our mission is to make the world visible, accessible, and actionable. We are pioneering a methodology, the CubeSat-Enabled Spatio-Temporal Enhancement Method (CESTEM), to enhance, harmonize, inter-calibrate, and fuse cross-sensor data streams, leveraging rigorously calibrated 'gold standard' satellites (e.g., Sentinel, Landsat, MODIS) in synergy with superior-resolution CubeSats from Planet. The result is next-generation analysis-ready data: clean (i.e. free from clouds and shadows), gap-filled (i.e. daily, 3 m), temporally consistent, radiometrically robust, and sensor-agnostic surface reflectance feeds synergizing inputs from both public and private sensor sources. The enhanced data readiness, interoperability, and resolution offer unique opportunities for advancing big data analytics and positioning remote sensing as a trustworthy source of usable and actionable insights.
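One basic ingredient of such cross-sensor harmonization is inter-calibration against the reference sensor. The sketch below shows per-band linear least-squares calibration on synthetic reflectances; the gain/offset values and noise level are invented assumptions, and this is not Planet's actual CESTEM algorithm, which is far richer.

```python
# Hedged sketch: per-band linear cross-calibration of a target sensor
# against a 'gold standard' sensor, the simplest form of inter-calibration.
# All data are synthetic; the bias (gain/offset) is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_bands = 1000, 4
gold = rng.uniform(0.0, 0.5, (n_pix, n_bands))      # reference reflectance
gain, offset = 1.08, -0.02                          # assumed sensor bias
cube = gain * gold + offset + rng.normal(0, 0.005, gold.shape)

# fit gold ≈ a*cube + c per band by least squares, then apply the correction
harmonized = np.empty_like(cube)
for b in range(n_bands):
    A = np.column_stack([cube[:, b], np.ones(n_pix)])
    a, c = np.linalg.lstsq(A, gold[:, b], rcond=None)[0]
    harmonized[:, b] = a * cube[:, b] + c

rmse_before = np.sqrt(np.mean((cube - gold) ** 2))
rmse_after = np.sqrt(np.mean((harmonized - gold) ** 2))
print(f"RMSE before {rmse_before:.4f} -> after {rmse_after:.4f}")
```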
How to cite: Houborg, R. and Marchisio, G.: Advanced harmonization and sensor fusion to transform data readiness and resolution for big data analytics, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13473, https://doi.org/10.5194/egusphere-egu2020-13473, 2020.
EGU2020-19171 | Displays | ITS4.1/NP4.2
Design of database systems for optimized spatio-temporal querying to facilitate monitoring, analysing and forecasting in the "Internet of Water"
Erik Bollen, Brianna R. Pagán, Bart Kuijpers, Stijn Van Hoey, Nele Desmet, Rik Hendrix, Jef Dams, and Piet Seuntjens
Monitoring, analysing and forecasting water systems, such as rivers, lakes and seas, is an essential part of the tasks of an environmental agency or government. In the region of Flanders, in Belgium, different organisations have united to create the "Internet of Water" (IoW). During this project, 2500 wireless water-quality sensors will be deployed in rivers, canals and lakes all over Flanders. This network of sensors will support more accurate management of water systems by feeding in real-time data. Applications include monitoring real-time water flows, automated warnings and notifications to the appropriate organisations, tracing pollution, and predicting salinisation.
Despite the diversity of these applications, they mostly rely on a correct spatial representation and fast querying of the flow path: where does water flow to, where can the water come from, and when does the water pass at certain locations? In the specific case of Flanders, the human-influenced landscape provides additional complexity with rivers, channels, barriers and even cycles. Numerous models and systems exist that are able to answer the above questions, even very precisely, but they often lack the ability to produce the results quickly enough for real-time applicability that is required in the IoW. Moreover, the rigid data representation makes it impossible to integrate new data sources and data types, especially in the IoW, where the data originates from vastly different backgrounds.
In this research, we focus on the performance of spatio-temporal queries, taking into account the spatial configuration of a strongly human-influenced water system and the real-time acquisition and processing of sensor data. Graph-database systems are compared with relational-database systems for storing topologies and executing recursive path-tracing queries. Not only storing and querying are taken into account; the creation and updating of the topologies are also an essential part. Moreover, we investigate the advantages of a hybrid approach that integrates graph databases for spatial topologies with relational databases for temporal and water-system attributes. Fast querying of both upstream and downstream flow-path information is of great use in various applications (e.g., pollution tracking, alerting, relating sensor signals, …). By adding a wrapper library and creating a standardised result graph representation, the complexity is abstracted away from the individual applications.
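On the relational side, the recursive path-tracing queries mentioned above are typically expressed as recursive common table expressions. The toy example below traces all reaches downstream of a segment with SQLite standing in for the production database; the segment graph is invented for illustration.

```python
# Hedged sketch: downstream flow-path tracing with a recursive CTE over a
# river-segment topology stored as an edge table. SQLite in-memory database;
# the toy topology (A -> B -> C, side channel B -> D) is an assumption.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE reach (src TEXT, dst TEXT)")
con.executemany("INSERT INTO reach VALUES (?, ?)",
                [("A", "B"), ("B", "C"), ("B", "D")])

rows = con.execute("""
    WITH RECURSIVE downstream(node) AS (
        SELECT 'A'                               -- start segment
        UNION                                    -- UNION dedupes, so cycles terminate
        SELECT reach.dst
        FROM reach JOIN downstream ON reach.src = downstream.node
    )
    SELECT node FROM downstream
""").fetchall()
path = {r[0] for r in rows}
print(path)   # segments reachable downstream of A, including A itself
```

A graph database expresses the same traversal as a native path query, which is one axis of the performance comparison described above.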
How to cite: Bollen, E., R. Pagán, B., Kuijpers, B., Van Hoey, S., Desmet, N., Hendrix, R., Dams, J., and Seuntjens, P.: Design of database systems for optimized spatio-temporal querying to facilitate monitoring, analysing and forecasting in the "Internet of Water", EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19171, https://doi.org/10.5194/egusphere-egu2020-19171, 2020.
EGU2020-21549 | Displays | ITS4.1/NP4.2
NetCDF: Performance and Storage Optimization of Meteorological Data
Valentín Kivachuk Burdá and Michaël Zamo
Any software relies on data, and the meteorological field is no exception. Using correct and accurate data is as important as using it efficiently. GRIB and NetCDF are the most popular file formats used in meteorology, and exactly the same data can be stored in either. However, they differ in how they treat the data internally, and converting from GRIB (a simpler file format) to NetCDF is not enough to ensure the best efficiency for final applications.
In this study, we improved the performance and storage of the 'ARPEGE cloud cover forecasts post-processing with convolutional neural network' and 'Precipitation Nowcasting using Deep Neural Network' projects (proposed in other sessions of the EGU General Assembly). The data treatment of both projects was studied, and different NetCDF capabilities were applied in order to obtain significantly faster execution times (up to 60 times faster) and more efficient space usage.
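Two of the storage levers NetCDF-4 exposes are packing floats to short integers (the scale_factor/add_offset convention) and zlib deflation; chunking does the same for access speed. The standard-library sketch below illustrates only the packing-plus-deflation principle on a synthetic smooth field; a real workflow would instead use the netCDF4 library (e.g. createVariable(..., zlib=True, chunksizes=...)), and the field and sizes here are assumptions.

```python
# Hedged sketch: scale/offset packing of float32 data to int16 followed by
# zlib compression, mimicking NetCDF-4 storage optimizations with the
# standard library only. The "cloud cover" field is synthetic.
import array
import math
import zlib

nx, ny = 100, 100
field = [0.5 + 0.5 * math.sin(i / 15.0) * math.cos(j / 15.0)
         for i in range(nx) for j in range(ny)]

raw = array.array("f", field).tobytes()               # float32 baseline

# pack to int16 with a scale factor and offset, as NetCDF conventions allow
lo, hi = min(field), max(field)
scale = (hi - lo) / 65534 or 1.0
packed = array.array("h", (round((v - lo) / scale) - 32767 for v in field))
deflated = zlib.compress(packed.tobytes(), 4)

ratio = len(raw) / len(deflated)
print(f"raw {len(raw)} B -> packed+deflated {len(deflated)} B (x{ratio:.1f})")

# unpacking recovers each value to within one quantization step
v0 = (packed[0] + 32767) * scale + lo
```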
How to cite: Kivachuk Burdá, V. and Zamo, M.: NetCDF: Performance and Storage Optimization of Meteorological Data, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21549, https://doi.org/10.5194/egusphere-egu2020-21549, 2020.
EGU2020-19620 | Displays | ITS4.1/NP4.2
Learning ordinary differential equations from remote sensing data
Jose E. Adsuara, Adrián Pérez-Suay, Alvaro Moreno-Martínez, Anna Mateo-Sanchis, Maria Piles, Guido Kraemer, Markus Reichstein, Miguel D. Mahecha, and Gustau Camps-Valls
Modeling and understanding the Earth system is of paramount relevance. Modeling the complex interactions among variables in both space and time is a constant and challenging endeavour. When a clear mechanistic model of variable interaction and evolution is unavailable or uncertain, learning from data can be an alternative.
Currently, Earth observation (EO) remote sensing data provide almost continuous sampling of the Earth system in space and time, and have been used to monitor our planet with advanced, semiautomatic algorithms able to classify, detect changes, and retrieve relevant biogeophysical parameters of interest. Despite great advances in classification and regression, learning from data remains an elusive problem in machine learning for the Earth sciences. The hardest part turns out to be extracting the relevant information and finding reliable models for summarizing, modeling, and understanding the variables and parameters of interest.
We introduce the use of machine learning techniques to bring systems of ordinary differential equations (ODEs) to light purely from data. Learning ODEs from stochastic variables is a challenging problem, and hence scarcely studied in the literature. Sparse regression algorithms allow us to explore the space of ODE solutions from data. Owing to Occam's razor, and exploiting extra physics-aware regularization, the presented method identifies the simplest and most expressive ODEs explaining the data. From the learned ODE, one not only obtains the underlying dynamical equation governing the system; standard analysis also allows us to infer collapse, turning points, and stability regions of the system. We illustrate the methodology using particular remote sensing datasets quantifying biosphere and vegetation status. These analytical equations turn out to be self-explanatory models which may provide insight into these particular Earth subsystems.
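As a minimal illustration of the sparse-regression idea (in the spirit of SINDy-style identification, not the authors' exact algorithm), the sketch below recovers a logistic growth ODE, dx/dt = r·x − (r/K)·x², from a simulated trajectory: least squares over a small candidate library, then thresholding of small coefficients. The values of r, K and the threshold are arbitrary illustrative choices.

```python
# Hedged sketch: sparse identification of an ODE from data.
# Simulate logistic growth, estimate dx/dt numerically, regress it on a
# candidate library [x, x^2], and zero out small coefficients.
import numpy as np

r, K, dt = 0.5, 2.0, 0.01
t = np.arange(0, 10, dt)
x = K / (1 + (K / 0.1 - 1) * np.exp(-r * t))       # exact logistic solution

dxdt = np.gradient(x, dt)                          # numerical derivative
library = np.column_stack([x, x ** 2])             # candidate terms: x, x^2
coef, *_ = np.linalg.lstsq(library, dxdt, rcond=None)
coef[np.abs(coef) < 0.05] = 0.0                    # sparsity threshold

print(coef)   # expected near [r, -r/K] = [0.5, -0.25]
```

With noisy remote sensing series, the regression step is typically replaced by a robust or regularized variant, and the library extended with physically motivated terms.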
How to cite: Adsuara, J. E., Pérez-Suay, A., Moreno-Martínez, A., Mateo-Sanchis, A., Piles, M., Kraemer, G., Reichstein, M., Mahecha, M. D., and Camps-Valls, G.: Learning ordinary differential equations from remote sensing data, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19620, https://doi.org/10.5194/egusphere-egu2020-19620, 2020.
EGU2020-13151 | Displays | ITS4.1/NP4.2
Learning recurrent transfer functions from data: From climate variability to high river discharge
Mateusz Norel, Krzysztof Krawiec, and Zbigniew Kundzewicz
Interpretation of flood hazard and its variability remains a major challenge for climatologists, hydrologists and water management experts. This study investigates the existence of links between variability in high river discharge worldwide and inter-annual and inter-decadal climate oscillation indices: the El Niño-Southern Oscillation, North Atlantic Oscillation, Pacific Interdecadal Oscillation, and Atlantic Multidecadal Oscillation. The global river discharge data used here stem from the ERA-20CM-R reconstruction at 0.5-degree resolution and form a multidimensional time series, with each observation being a spatial matrix of estimated discharge volume. Elements of the matrices aligned spatially form time series which were used to induce dedicated predictive models using machine learning tools, including multivariate regression (e.g. ARMA) and recurrent neural networks (RNNs), in particular the Long Short-Term Memory model (LSTM), which has proved effective in many other application areas. The models are thoroughly tested and juxtaposed in hindcasting mode on a separate test set and scrutinized with respect to their statistical characteristics. We hope to contribute to an improved interpretation of the variability of flood hazard and to a reduction of uncertainty.
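The linear end of the model spectrum described above can be sketched compactly as an autoregression with an exogenous climate-index term, fit by least squares; the recurrent networks in the study replace this linear map with learned nonlinear state. The "discharge" and "index" series below are synthetic assumptions, not ERA-20CM-R data.

```python
# Hedged sketch: ARX baseline linking a climate oscillation index to
# discharge. The generating coefficients (0.8 persistence, 0.5 index
# response) are invented, and the fit should recover them.
import numpy as np

rng = np.random.default_rng(2)
n = 500
idx = np.sin(np.arange(n) / 20.0)                 # toy oscillation index
q = np.zeros(n)                                   # toy discharge anomaly
for k in range(1, n):
    q[k] = 0.8 * q[k - 1] + 0.5 * idx[k] + rng.normal(0, 0.1)

# predictors: previous discharge, current index, and an intercept
X = np.column_stack([q[:-1], idx[1:], np.ones(n - 1)])
y = q[1:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)   # expected near [0.8, 0.5, 0.0]
```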
How to cite: Norel, M., Krawiec, K., and Kundzewicz, Z.: Learning recurrent transfer functions from data: From climate variability to high river discharge, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13151, https://doi.org/10.5194/egusphere-egu2020-13151, 2020.
ITS4.2/ESSI4.2 – State of the Art in Earth Science Data Visualization
EGU2020-11833 | Displays | ITS4.2/ESSI4.2
How to appreciate, use, and choose Scientific Colour Maps
Grace E. Shephard, Fabio Crameri, and Philip J. Heron
The visual representation of data is at the heart of science. One of the choices faced by the scientist in representing data is the decision regarding colours. However, due to historical usage and default colour palettes in visualisation software, colour maps that distort data through uneven colour gradients are still commonly used today. In fact, the most-used colour map in presentations at the EGU General Assembly in 2018 (including Geodynamics sessions) was the one colour map most widely known to distort the data and misguide readers (see https://betterfigures.org/2018/04/16/how-many-rainbows-at-egu-2018/).
Here, we present the work that has been accomplished and the readily available solution, along with a how-to guide to 'Scientific Colour Maps' (Crameri 2018, Zenodo; Crameri et al., in review), a methodology that prevents data distortion, offers intuitive colouring, and is accessible to people with colour-vision deficiencies.
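One property separating a scientific colour map from a distorting one can even be checked programmatically: its perceived lightness should vary monotonically along the map, so equal data steps read as equal visual steps. The sketch below applies a crude Rec. 709 luminance check (a rough stand-in for perceived lightness, and our assumption, not the authors' colour-appearance analysis) to matplotlib's perceptually uniform 'viridis' and the classic rainbow 'jet'; the Crameri maps themselves ship separately (e.g. via the cmcrameri package).

```python
# Hedged sketch: test whether a colour map's approximate lightness is
# monotonic along the map, a necessary trait of perceptually uniform maps.
import numpy as np
import matplotlib.pyplot as plt

def luminance(cmap_name, n=64):
    # sample the colour map and compute Rec. 709 relative luminance
    rgb = plt.get_cmap(cmap_name)(np.linspace(0.0, 1.0, n))[:, :3]
    return rgb @ np.array([0.2126, 0.7152, 0.0722])

def is_monotonic(lum, tol=5e-3):
    d = np.diff(lum)
    return bool(np.all(d >= -tol) or np.all(d <= tol))

print("viridis monotonic:", is_monotonic(luminance("viridis")))  # uniform map
print("jet monotonic:    ", is_monotonic(luminance("jet")))      # rainbow map
```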
Crameri, F. (2018). Scientific colour-maps. Zenodo. http://doi.org/10.5281/zenodo.1243862
Crameri, F., Shephard, G.E., and Heron, P.J.: Advantage, availability, and application of Scientific Colour Maps (in review with Nature Communications)
How to cite: Shephard, G. E., Crameri, F., and Heron, P. J.: How to appreciate, use, and choose Scientific Colour Maps, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11833, https://doi.org/10.5194/egusphere-egu2020-11833, 2020.
EGU2020-321 | Displays | ITS4.2/ESSI4.2
How Earth Scientists Communicate With Maps?
Márton Pál and Gáspár Albert
To communicate the results of research, or scientific facts and theories concerning spatial characteristics, every field of geoscience uses thematic cartography to represent spatial information. Because of this, journals, books and other publications in the earth sciences have always needed accurate, reliable and clear methods of map visualization. A map should thematically fit into the body of the publication and should enrich the content. It is an extra task for scientists to take basic cartographic and visual rules into consideration, but with correct methods their publications can earn multiple benefits, such as increased readership and wider dissemination.
When using cartographic methods, we need to find a balance within the triad of i) precision, ii) quality of visual representation and iii) quantity of thematic data. The primary aim is to give an overall 'image' of the spatial phenomena concerned that effectively complements the written text of an article. However, these representations sometimes lack important marks that help the reader understand the information. Our study focuses on the quality of cartographic visualization in the geosciences, measuring these marks with the help of an objective system of criteria. These include image quality, cartographic elements that help to locate the studied area (e.g. coordinates), topographic content, and copyright rules. Using this system, we graded each map on each criterion. We have assessed more than 300 maps per field of geoscience (geology, geography, geophysics, meteorology, cartography) in international and Hungarian journals and conference posters.
By summarizing the grades, multiple conclusions can be drawn. We can analyse the map usage habits of each science field: what type of map do they usually use (e.g. thematic or topographic), do they use maps to present results or just to give an overview of a studied area, and what are the common mistakes that may confuse the reader when interpreting a map. These statistics also offer the opportunity to give advice to each branch on developing its map communication skills. It was also possible to inspect the cartographic practices of each country. We have found that there is large spatial variability in the map use habits of different cultures. This can mean either specific but correct ways of visualization, or solutions that make the map hard to understand and should not be followed. By examining this spatial factor, proposals concerning objective map element usage can be given to countries or even whole regions to improve their cartographic communication skills.
How to cite: Pál, M. and Albert, G.: How Earth Scientists Communicate With Maps?, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-321, https://doi.org/10.5194/egusphere-egu2020-321, 2020.
EGU2020-5166 | Displays | ITS4.2/ESSI4.2
Visualization Strategies for Optimal 3D Time-Energy Trajectory Planning for AUVs using Ocean General Circulation Models
Thomas Theussl, Sultan Albarakati, Ricardo Lima, Ibrahim Hoteit, and Omar Knio
In this presentation, we discuss visualization strategies for optimal time and energy trajectory planning problems for Autonomous Underwater Vehicles (AUVs) in transient 3D ocean currents. Realistic forecasts using an Ocean General Circulation Model (OGCM) are used to define time and energy optimal AUV trajectory problems in 2D and 3D. The visualization goal is to explore and explain the trajectory the AUV follows, especially how it exploits both the vertical structure of the current field as well as its unsteadiness to minimize travel time and energy consumption. We present our choice of visualization tools for this purpose and discuss shortcomings and possible improvements, especially for challenging scenarios involving 3D time-dependent flow and realistic bathymetry.
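As a minimal illustration of how currents enter such trajectory problems (this is a generic kinematic building block, not the authors' planning method), the heading that compensates a steady 2D current so that the net velocity stays on a desired track can be computed in closed form:

```python
import math

def heading_for_track(theta_track, speed, current):
    """Heading (rad) so a vehicle moving at `speed` through the water follows
    the track direction `theta_track` despite a steady current (cu, cv).
    Returns (heading, ground_speed); raises if the current is too strong."""
    cu, cv = current
    # Current component perpendicular to the desired track.
    c_perp = -cu * math.sin(theta_track) + cv * math.cos(theta_track)
    if abs(c_perp) > speed:
        raise ValueError("current too strong to hold this track")
    crab = math.asin(-c_perp / speed)          # crab angle into the current
    heading = theta_track + crab
    c_along = cu * math.cos(theta_track) + cv * math.sin(theta_track)
    ground_speed = speed * math.cos(crab) + c_along
    return heading, ground_speed
```

A time-optimal planner effectively trades off such crab angles against riding favorable currents, which is exactly the behavior the visualizations aim to expose.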
How to cite: Theussl, T., Albarakati, S., Lima, R., Hoteit, I., and Knio, O.: Visualization Strategies for Optimal 3D Time-Energy Trajectory Planning for AUVs using Ocean General Circulation Models, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5166, https://doi.org/10.5194/egusphere-egu2020-5166, 2020.
EGU2020-7078 | Displays | ITS4.2/ESSI4.2
Visualization of high-resolution climate model output in a Visualization dome
Florian Ziemen, Niklas Röber, Dela Spickermann, and Michael Böttinger
The new generation of global storm-resolving climate models yields model output at unprecedented resolution, going way beyond what can be displayed on a state-of-the-art computer screen. This data can be visualized in photo-realistic renderings that cannot be easily distinguished from satellite data (e.g. Stevens et al, 2019). The EU-funded Centre of Excellence in Simulation of Weather and Climate in Europe (ESiWACE) enables this kind of simulations through improvements of model performance, data storage and processing. It is closely related with the DYAMOND model intercomparison project. The Max-Planck-Institute for Meteorology (MPI-M) will contribute to the second phase of the DYAMOND intercomparison with coupled global 5 km-resolving atmosphere-ocean climate simulations, internally called DYAMOND++.
Because of the great level of detail, these simulations are especially appealing for scientific outreach. In this PICO presentation we will illustrate how we turn the output of a DYAMOND++ test simulation into a movie clip for dome theaters, as used in the WISDOME contest of the IEEE EUROVIS conference and in planetaria and science centers. Our presentation outlines the main steps of this process from data generation via pre-processing to the methods employed in the rendering of the scenes.
Stevens, B., Satoh, M., Auger, L. et al.: DYAMOND: the DYnamics of the Atmospheric general circulation Modeled On Non-hydrostatic Domains. Prog Earth Planet Sci (2019) 6: 61. https://doi.org/10.1186/s40645-019-0304-z
How to cite: Ziemen, F., Röber, N., Spickermann, D., and Böttinger, M.: Visualization of high-resolution climate model output in a Visualization dome, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7078, https://doi.org/10.5194/egusphere-egu2020-7078, 2020.
EGU2020-7173 | Displays | ITS4.2/ESSI4.2
Robust Color Maps That Work for Most Audiences (Including the U.S. President)
Reto Stauffer and Achim Zeileis
Color is an integral element in many visualizations in (geo-)sciences, specifically in maps but also bar plots, scatter plots, or time series displays. Well-chosen colors can make graphics more appealing and, more importantly, help to clearly communicate the underlying information. Conversely, poorly-chosen colors can obscure information or confuse readers. One example of the latter gained prominence in the controversy over Hurricane Dorian: Using an official weather forecast map, U.S. President Donald Trump repeatedly claimed that early forecasts showed a high probability of Alabama being hit. We demonstrate that a potentially confusing rainbow color map may have contributed to an overestimation of the risk (among other factors that stirred the discussion).
To avoid such problems, we introduce general strategies for selecting robust color maps that are intuitive for many audiences, including readers with color vision deficiencies. The construction of sequential, diverging, or qualitative palettes is based on appropriate light-dark "luminance" contrasts while suitably controlling the "hue" and the colorfulness ("chroma"). The strategies are also easy to put into practice using computations based on the so-called Hue-Chroma-Luminance (HCL) color model, e.g., as provided in our "colorspace" software package (http://hclwizard.org), available for both the R and Python programming languages. In addition to the HCL-based color maps the package provides interactive apps for exploring and modifying palettes along with further tools for manipulation and customization, demonstration plots, and emulation of visual constraints.
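The core idea, luminance varied monotonically at controlled hue and chroma, can be sketched in a self-contained way. The "colorspace" package provides ready-made palette constructors; the code below only illustrates the underlying HCL (polar CIELUV) to sRGB conversion with standard D65 formulas, and the default palette parameters are illustrative:

```python
import math

def hcl_to_hex(h, c, l):
    """Convert an HCL (polar CIELUV) color to an sRGB hex string (D65 white)."""
    # HCL -> CIELUV
    u = c * math.cos(math.radians(h))
    v = c * math.sin(math.radians(h))
    # CIELUV -> XYZ (D65 reference white, Y_n = 1)
    un, vn = 0.1978398, 0.4683363
    y = ((l + 16) / 116) ** 3 if l > 8 else l / 903.3
    up = u / (13 * l) + un if l > 0 else un
    vp = v / (13 * l) + vn if l > 0 else vn
    x = y * 9 * up / (4 * vp)
    z = y * (12 - 3 * up - 20 * vp) / (4 * vp)
    # XYZ -> linear sRGB -> gamma-encoded sRGB
    rgb_lin = (3.2404542 * x - 1.5371385 * y - 0.4985314 * z,
               -0.9692660 * x + 1.8760108 * y + 0.0415560 * z,
               0.0556434 * x - 0.2040259 * y + 1.0572252 * z)
    def encode(ch):
        ch = min(max(ch, 0.0), 1.0)
        ch = 12.92 * ch if ch <= 0.0031308 else 1.055 * ch ** (1 / 2.4) - 0.055
        return round(255 * ch)
    return "#{:02X}{:02X}{:02X}".format(*(encode(ch) for ch in rgb_lin))

def sequential_hcl(n, hue=260, chroma=(80, 10), luminance=(30, 90)):
    """n-color sequential palette: fixed hue, lighter colors get lower chroma."""
    return [hcl_to_hex(hue,
                       chroma[0] + (chroma[1] - chroma[0]) * i / (n - 1),
                       luminance[0] + (luminance[1] - luminance[0]) * i / (n - 1))
            for i in range(n)]

pal = sequential_hcl(5)  # dark blue to light gray-blue
```

Because luminance increases monotonically, the palette remains readable in grayscale and for most color vision deficiencies, which is precisely the robustness property argued for above.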
How to cite: Stauffer, R. and Zeileis, A.: Robust Color Maps That Work for Most Audiences (Including the U.S. President), EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7173, https://doi.org/10.5194/egusphere-egu2020-7173, 2020.
EGU2020-9931 | Displays | ITS4.2/ESSI4.2
RTView 1.0: a new software for High Rate GNSS data analysis and visualization in Real Time
Francesco Pandolfo, Mario Mattia, Massimo Rossi, and Valentina Bruno
Monitoring volcano ground deformation requires hardware and software tools of high complexity for processing raw GNSS data, filtering outliers and spikes, and clearly visualizing displacements as they occur in real time. In this project we developed a web application for visualizing high-rate real-time signals from permanent GNSS remote stations managed by INGV OE (Istituto Nazionale di Geofisica e Vulcanologia, Osservatorio Etneo). Currently the new software tool is able to import GNSS data processed by some of the most important high-rate real-time software packages such as GeoRTD® (owned by Geodetics), GNSS Spider® (owned by Leica Geosystems) and RTKlib. The tool is based on the Grafana open source platform and the InfluxDB open source database. Various dashboards have been configured to display time series of the North-East-Up coordinates to monitor single stations, to compare signals coming from different data sources and to display the displacement vectors on the map. We also applied a simple algorithm for the detection of abnormal variations due to impending volcanic activity. This web interface is applied to different active Italian volcanoes such as Etna (Sicily), Stromboli (Aeolian Islands) and the Phlegrean Fields (Naples). We tested the performance of this software using the 24 December 2018 dike intrusion on Etna volcano as a case study.
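Feeding North-East-Up solutions into InfluxDB can be sketched with its text-based line protocol (measurement, tag set, field set, timestamp). The measurement name, tag key and station ID below are illustrative assumptions, not RTView's actual schema:

```python
# Format one epoch of GNSS displacement (metres) as an InfluxDB line-protocol
# record: "<measurement>,<tags> <fields> <timestamp_ns>".
def neu_line(station, north, east, up, ts_ns):
    return (f"gnss_displacement,station={station} "
            f"north={north:.4f},east={east:.4f},up={up:.4f} {ts_ns}")

# Hypothetical station ID; timestamp is 2018-12-24T00:00:00Z in nanoseconds.
line = neu_line("EPLU", 0.0123, -0.0047, 0.0310, 1545609600000000000)
```

Grafana then queries such series directly from InfluxDB, so each dashboard panel reduces to a query over the `north`, `east` and `up` fields filtered by the `station` tag.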
How to cite: Pandolfo, F., Mattia, M., Rossi, M., and Bruno, V.: RTView 1.0: a new software for High Rate GNSS data analysis and visualization in Real Time, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9931, https://doi.org/10.5194/egusphere-egu2020-9931, 2020.
EGU2020-9945 | Displays | ITS4.2/ESSI4.2
Visualizing anthropogenic methane plumes from the California Methane Survey
Andrew Thorpe, Riley Duren, Robert Tapella, Brian Bue, Kelsey Foster, Vineet Yadav, Talha Rafiq, Francesca Hopkins, Kevin Gill, Joshua Rodriguez, Aaron Plave, Daniel Cusworth, and Charles Miller
The 2016-2018 California Methane Survey used the airborne imaging spectrometer AVIRIS-NG to survey approximately 59,000 km2 and 272,000 individual facilities and infrastructure components. Over 500 strong methane point sources spanning the waste management, agriculture, and energy sectors were detected, geolocated, and quantified. In order to facilitate communication of results with scientists, stakeholder agencies in California, private sector companies, and the public, we developed the Methane Source Finder web-based data portal. This state-of-the-art Earth science data visualization tool allows users to discover, analyze, and download data across a range of spatial scales derived from remote sensing, surface monitoring, and bottom-up infrastructure information. In this presentation, we will highlight our overall science findings from the California Methane Survey and provide a number of examples where observed methane plumes were used to directly guide leak detection and repair efforts. Future plans include expanding the data portal beyond California and incorporating regional-scale flux inversions derived from satellite observations. Methane Source Finder supports methane research (e.g., multi-scale synthesis), enables facility-scale mitigation, and improves public awareness of greenhouse gas emissions.
How to cite: Thorpe, A., Duren, R., Tapella, R., Bue, B., Foster, K., Yadav, V., Rafiq, T., Hopkins, F., Gill, K., Rodriguez, J., Plave, A., Cusworth, D., and Miller, C.: Visualizing anthropogenic methane plumes from the California Methane Survey, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9945, https://doi.org/10.5194/egusphere-egu2020-9945, 2020.
EGU2020-11205 | Displays | ITS4.2/ESSI4.2
Counter-Earths – Planetary models beyond operational images
Paul Heinicker and Lukáš Likavčan
This contribution deals with an extensive apparatus of sensing and modelling the Earth, producing numerous fragmented Counter-Earths: the digital models and data visualizations of the planetary ecosystem. We center our analysis on this increasingly non-human visual culture in order to seek possible theoretical framings of global climate sensing and modelling. After a historical and theoretical introduction to the emergence and composition of this infrastructure, drawing from the works of Jennifer Gabrys and Paul N. Edwards, we elaborate a framework in which machine production of images of the planet can be seen as a continuous algorithmic process of transformation of planetary circumstances. Contesting the interpretation of the imagery that facilitates this process as representations of the planet, we categorize climate models and satellite visual outputs as operational images, following insights by Vilém Flusser and Harun Farocki. While fully acknowledging its historical and theoretical importance, this terminology is here assessed as still too human-centric, and for this reason we proceed to Dietmar Offenhuber’s concept of autographic visualization, which endows non-human assemblages with the capacity for self-presentation and self-diagrammatization. Consequently, we conclude with several examples of autographic visualization of climate change on a planetary scale, reading the Earth’s externalized, geological memory of modernity through machine sensing systems that uncover these hidden traces of the past.
How to cite: Heinicker, P. and Likavčan, L.: Counter-Earths – Planetary models beyond operational images, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11205, https://doi.org/10.5194/egusphere-egu2020-11205, 2020.
EGU2020-4798 | Displays | ITS4.2/ESSI4.2
Visualization of 4-component borehole strainmeter data in China
Fuzhen Li, Tianxiang Ren, Huai Zhang, and Yaolin Shi
With the accumulation of 4-component borehole strainmeter data and the improvement of observation reliability, improving the efficiency of processing, analyzing and visualizing these data has become a primary task of current research. Visualization of borehole strain observation data is a key means to convey the information behind the data, display research results and extract the shallow surface stress state revealed by borehole strain.
Borehole strainmeter data are of great significance for earthquake prediction research due to their high resolution on the short-to-medium-term time scales of earthquake prediction. With the progress of observation technology, many four-component borehole strain gauges in China have passed through the data stabilization period of the early years after installation, and the borehole strain stations have begun to produce a batch of high-quality observation data.
Using the normal stress petal diagram to show changes in ground stress, one can not only qualitatively analyze the change of the relative ground stress at a station, but also quantitatively read the observed normal stress in any direction at a given time. In this paper, the normal stress petal diagram method is combined with map visualization technology to process and analyze four-component borehole strain observation data across the country. The main contributions are as follows. First, the stress petal visualization platform can effectively display the dynamic stress in all directions for 30 stations across the country. Second, variable sliding window lengths and sliding spacings, added according to specific needs, can both directly display the change of the stress petals over the years and show the stress petal map of the solid tide strain all over the country. Third, the platform can display the co-seismic stress petal variation images observed at the national borehole strain stations and visually show the stress changes observed by a local borehole strain gauge during seismic wave propagation. Finally, borehole strainmeter data can monitor the relative geostress state of faults near the borehole; the magnitude and direction of the maximum principal stress at a station, as reflected by the stress petal, can then be used to calculate the corresponding changes of dynamic and static Coulomb stress, which helps to analyze seismic dynamic triggering problems.
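The quantitative reading of normal strain in any azimuth, which underlies the petal diagram, can be sketched for an ideal four-gauge installation (gauges at 0°, 45°, 90°, 135°; calibration factors and borehole coupling corrections, which a real processing chain must include, are omitted):

```python
import math

def petal(gauges):
    """From four gauge readings at azimuths 0, 45, 90, 135 degrees, return a
    function giving normal strain in any azimuth theta (degrees).
    Idealized model: gauge i at azimuth theta_i reads
        e_i = ea/2 + (g1/2) cos(2 theta_i) + (g2/2) sin(2 theta_i),
    so areal strain ea = e1 + e3 (= e2 + e4, the self-consistency check)
    and the shear components are g1 = e1 - e3, g2 = e2 - e4."""
    e1, e2, e3, e4 = gauges
    ea = e1 + e3          # areal strain
    g1 = e1 - e3          # shear component along the 0/90 axes
    g2 = e2 - e4          # shear component along the 45/135 axes
    def normal_strain(theta_deg):
        t = math.radians(theta_deg)
        return ea / 2 + (g1 / 2) * math.cos(2 * t) + (g2 / 2) * math.sin(2 * t)
    return normal_strain

# Synthetic readings built from ea=10, g1=4, g2=2 (arbitrary strain units)
eps = petal([10/2 + 4/2, 10/2 + 2/2, 10/2 - 4/2, 10/2 - 2/2])
```

Plotting `normal_strain(theta)` over 0–360° as a polar curve yields the petal shape; its maximum direction corresponds to the maximum principal strain axis discussed above.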
How to cite: Li, F., Ren, T., Zhang, H., and Shi, Y.: Visualization of 4-component borehole strainmeter data in China, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4798, https://doi.org/10.5194/egusphere-egu2020-4798, 2020.
EGU2020-15469 | Displays | ITS4.2/ESSI4.2
Met.3D: Interactive 3D ensemble visualization for rapid exploration of atmospheric simulation data
Marc Rautenhaus
Visualization is an important and ubiquitous tool in the daily work of atmospheric researchers and weather forecasters to analyse data from simulations and observations. Visualization research has made much progress in recent years, in particular with respect to techniques for ensemble data, interactivity, 3D depiction, and feature-detection. Transfer of new techniques into the atmospheric sciences, however, is slow.
Met.3D (https://met3d.wavestoweather.de) is an open-source research software aiming at making novel interactive 3D and ensemble visualization techniques accessible to the atmospheric community. Since its first public release in 2015, Met.3D has been used in multiple visualization research projects targeted at atmospheric science applications, and also has evolved into a feature-rich visual analysis tool facilitating rapid exploration of atmospheric simulation data. The software is based on the concept of “building a bridge” between “traditional” 2D visual analysis techniques and interactive 3D techniques and allows users to analyse their data using combinations of 2D maps and cross-sections, meteorological diagrams and 3D techniques including direct volume rendering, isosurfaces and trajectories, all combined in an interactive 3D context.
This PICO will provide an overview of the Met.3D project and highlight recent additions and improvements to the software. We will show several examples of how the combination of 2D and 3D visualization elements in an interactive context can be used to explore atmospheric simulation data, including the analysis of forecast errors, analysis of synoptic-scale features including jet-streams and fronts, and analysis of forecast uncertainty in ensemble forecasts.
How to cite: Rautenhaus, M.: Met.3D: Interactive 3D ensemble visualization for rapid exploration of atmospheric simulation data, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-15469, https://doi.org/10.5194/egusphere-egu2020-15469, 2020.
EGU2020-18831 | Displays | ITS4.2/ESSI4.2
Interactive Jupyter Notebooks for the visual analysis of critical choices in Global Sensitivity Analysis
Valentina Noacco, Andres Peñuela-Fernandez, Francesca Pianosi, and Thorsten Wagener
With earth system models growing ever more complex, a comprehensive, transparent and easily communicable analysis of the interacting model components is becoming increasingly difficult. Global Sensitivity Analysis (GSA) provides a structured analytical approach to tackle this problem by quantifying the relative importance of various model inputs and components for the variability of the model outputs. However, a number of critical choices are needed to set up a GSA, such as selecting the appropriate GSA method for the intended purpose and defining the inputs to be tested and their variability space. In this work, we test the use of interactive visualization to analyze the impacts of such critical choices on GSA results and hence achieve a more robust and comprehensive understanding of the model behavior. To this end, we combine the Python version of the Sensitivity Analysis For Everybody (SAFE) toolbox, which is currently used by more than 2000 researchers worldwide, with the literate programming platform Jupyter Notebooks and interactive visualizations. Unlike traditional static visualization, interactive visualizations allow the user to interrogate in real time the impact of user choices on the analysis results. Due to the computational constraints of most earth system models, not all impacts can be visualized in real time (e.g. only those not requiring the model to be re-run). In those cases where a model needs to be re-run (e.g. to test the impact of the definition of the inputs' space of variability), interactive visualizations still offer a useful tool, which allows highlighting only specific features of interest (e.g. behavior of the input/output samples for extreme values of the inputs or outputs). Jupyter Notebooks, by combining text, code and figures, enhance the transparency, transferability and reproducibility of GSA results.
Interactive visualizations strengthen the understanding of the impacts of the choices made to run a GSA and the robustness of GSA results (e.g. by making it easy to assess the impact of varying the output metrics or the GSA method on the analysis results). In general, this work offers an example of how the use of notebooks and interactive visualizations can increase the transparency and communication of complex modelling concepts.
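As a hedged illustration of the kind of critical choice such notebooks expose (this is not SAFE's actual API; all names and parameters below are illustrative), the sketch estimates first-order sensitivity indices for the standard Ishigami test function by binning each input, where the bin count is exactly the sort of setup choice one would vary interactively:

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    # Standard GSA benchmark with a known sensitivity structure.
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

rng = np.random.default_rng(42)
n = 20000
X = rng.uniform(-np.pi, np.pi, size=(n, 3))  # the inputs' space of variability
y = ishigami(X)

def first_order_index(xi, y, bins=30):
    # Crude first-order index: Var(E[y|xi]) / Var(y), with the conditional
    # mean estimated over equal-probability bins of xi. The number of bins
    # is itself a critical choice whose impact one would probe interactively.
    edges = np.quantile(xi, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, xi, side="right") - 1, 0, bins - 1)
    means = np.array([y[idx == k].mean() for k in range(bins)])
    counts = np.array([(idx == k).sum() for k in range(bins)])
    return np.average((means - y.mean())**2, weights=counts) / y.var()

S = [first_order_index(X[:, i], y) for i in range(3)]
```

For the Ishigami function the analytical first-order indices are roughly S1 ≈ 0.31, S2 ≈ 0.44 and S3 = 0, so x2 should dominate and x3 should appear inactive despite its interaction with x1.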
How to cite: Noacco, V., Peñuela-Fernandez, A., Pianosi, F., and Wagener, T.: Interactive Jupyter Notebooks for the visual analysis of critical choices in Global Sensitivity Analysis, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18831, https://doi.org/10.5194/egusphere-egu2020-18831, 2020.
EGU2020-19018 | Displays | ITS4.2/ESSI4.2
Visual approach to clustering large-scale meteorological datasets
Andrew Barnes, Thomas Kjeldsen, and Nick McCullen
Identifying the atmospheric processes which lead to extreme events requires careful generalisation of the meteorological conditions surrounding such events, for example sea-level pressure and air temperature. Through the case study of clustering the processes behind extreme rainfall events (annual maximum 1-day rainfall totals) in Great Britain, this presentation shows how visualising the iterative processes used by clustering algorithms can aid in algorithm selection and optimisation. Here, two large datasets, namely the CEH-GEAR (gridded observed rainfall) and NCEP/NCAR Reanalysis datasets, are synthesised and clustered using different methods such as k-means, linkage methods and self-organising maps. The performances of these methods are compared and contrasted through analysis of the clusters created at each iteration, highlighting the importance of algorithm selection and understanding. This clustering process yields three large-scale meteorological condition types which lead to extreme rainfall in Great Britain, as well as a novel approach to comparing clustering mechanisms when using meteorological data.
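The per-iteration view advocated here can be sketched with a minimal k-means on synthetic gridded fields standing in for the real CEH-GEAR/NCEP composites (the data, grid size and weather-type count below are illustrative, not the study's actual configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for gridded fields (e.g. sea-level-pressure anomalies on
# extreme-rainfall days): three distinct spatial patterns plus noise.
n_cells, n_days = 20 * 20, 300
patterns = rng.normal(size=(3, n_cells))
true_type = rng.integers(0, 3, size=n_days)
fields = patterns[true_type] + 0.3 * rng.normal(size=(n_days, n_cells))

def kmeans(X, k, iters=15, seed=1):
    r = np.random.default_rng(seed)
    centroids = X[r.choice(len(X), k, replace=False)]
    history = []                                  # inertia per iteration, for plotting
    for _ in range(iters):
        d2 = ((X[:, None, :] - centroids[None, :, :])**2).sum(-1)
        assign = d2.argmin(1)                     # assignment step
        history.append(d2[np.arange(len(X)), assign].sum())
        for j in range(k):                        # update step
            if np.any(assign == j):
                centroids[j] = X[assign == j].mean(0)
    return assign, centroids, history

assign, centroids, history = kmeans(fields, 3)
```

Plotting `history` together with the cluster composites at each iteration is exactly the kind of view that helps compare k-means against linkage methods or self-organising maps.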
How to cite: Barnes, A., Kjeldsen, T., and McCullen, N.: Visual approach to clustering large-scale meteorological datasets, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19018, https://doi.org/10.5194/egusphere-egu2020-19018, 2020.
EGU2020-20005 | Displays | ITS4.2/ESSI4.2
OpenAltimetry: Key Elements of Success in Visualizing NASA's Spaceborne LiDAR Data
Siri Jodha Khalsa, Adrian Borsa, Viswanath Nandigam, and Minh Phan
NASA’s spaceborne laser altimeter, ICESat-2, sends 10,000 laser pulses per second towards Earth, in 6 separate beams, and records individual photons reflected back to its telescope. From these photon elevations, specialized ICESat-2 data products for land ice, sea ice, sea surface, land surface, vegetation and inland water are generated. Altogether these products total nearly 1 TB per day, which poses data management/visualization challenges for potential users. OpenAltimetry, a browser-based interactive visualization tool, was built to provide intuitive access to data from ICESat-2 and its predecessor mission (ICESat). It emphasizes ease of use and rapid access for expert and non-expert audiences alike. The initial design choices and subsequent user-informed development have led to a tool that has been enthusiastically received by the ICESat-2 Science Team, researchers from various disciplines, and the general public. This presentation will highlight the elements that led to OpenAltimetry’s success.
How to cite: Khalsa, S. J., Borsa, A., Nandigam, V., and Phan, M.: OpenAltimetry: Key Elements of Success in Visualizing NASA's Spaceborne LiDAR Data, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20005, https://doi.org/10.5194/egusphere-egu2020-20005, 2020.
EGU2020-20574 | Displays | ITS4.2/ESSI4.2
Data visualisation and information design at the science-policy interface: drawing from the IPCC experience.
Gomis Melissa, Berger Sophie, Matthews Robin, Connors Sarah, Yelekci Ozge, Harold Jordan, Morelli Angela, and Johansen Tom Gabriel
In this digital age, communication has become increasingly visual. Like never before, visual information is increasing exposure and widening outreach to new audiences. With growing demands from journals (table-of-contents art, visual abstracts, scientific figures), conferences (posters, presentations) and competitive grant submissions, the science world is not spared, and figures represent a tremendous opportunity to communicate findings more effectively. It is therefore important to get figures and images right for the intended audience, even more so when visualizing scientific data and conveying complex concepts.
The Intergovernmental Panel on Climate Change (IPCC), whose primary role is to inform policy makers on the state of knowledge on climate change, showcases how complex science can be visually communicated to a non-expert audience. Since its fifth assessment report, published in 2014, the IPCC has acknowledged the importance of communicating its assessments in an understandable, accessible, actionable and relevant way to all its stakeholders without compromising on the scientific robustness and accuracy.
Currently in its sixth assessment cycle, the IPCC features a new approach to figure design in its three recently published Special Reports. This approach consists of an unprecedented collaboration between design, information and cognition specialists and the IPCC authors. This co-design process, along with continuous guidance to authors on visualization and cognitive concepts, was conducted in a user-centered way to best serve the audience's needs and respective backgrounds. The challenge of visually representing multi-disciplinary results, and of testing, evaluating and refining the figures, improved the clarity of the key messages. The co-design method proved successful during the preparation of the Special Reports, and the preparation of the sixth assessment report is building on this experience. Despite a lack of available analytics, the IPCC communication department has observed unprecedented media coverage and a number of derivative products based on the Special Report figures created by third parties.
How to cite: Melissa, G., Sophie, B., Robin, M., Sarah, C., Ozge, Y., Jordan, H., Angela, M., and Tom Gabriel, J.: Data visualisation and information design at the science-policy interface: drawing from the IPCC experience. , EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20574, https://doi.org/10.5194/egusphere-egu2020-20574, 2020.
EGU2020-21494 | Displays | ITS4.2/ESSI4.2
Using Blender for Earth Science’s visualization
stella paronuzzi ticco, oriol tinto primis, and Thomas Arsouze
Blender is an open-source 3D creation suite with a wide range of applications and users. Even though it is not a tool specifically designed for scientific visualization, it has proved to be a very valuable tool for producing stunning visual results. We will show how our workflow goes from model output written in netCDF to a finished visual product, relying only on open-source software. The visualization formats that can be produced range from static images to 2D/3D/360°/virtual-reality videos, enabling a wide span of potential outcomes. These products are highly suitable for dissemination and scientific outreach.
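One early step of such a pipeline can be sketched as follows; this is an illustrative assumption about the workflow, not the authors' actual code. A synthetic array stands in for a 2-D netCDF variable, and a stdlib-only encoder writes a grayscale heightmap PNG of the kind Blender can use (e.g. as a displacement texture on a subdivided plane, driven via its `bpy` Python API):

```python
import struct, zlib
import numpy as np

def encode_gray_png(img8):
    """Minimal 8-bit grayscale PNG encoder (standard library only)."""
    h, w = img8.shape
    def chunk(tag, data):
        body = tag + data
        return struct.pack(">I", len(data)) + body + struct.pack(">I", zlib.crc32(body))
    ihdr = struct.pack(">IIBBBBB", w, h, 8, 0, 0, 0, 0)   # 8-bit depth, grayscale
    raw = b"".join(b"\x00" + img8[r].tobytes() for r in range(h))  # filter 0 per scanline
    return (b"\x89PNG\r\n\x1a\n" + chunk(b"IHDR", ihdr)
            + chunk(b"IDAT", zlib.compress(raw)) + chunk(b"IEND", b""))

# Synthetic stand-in for a 2-D field read from a netCDF file
# (a real workflow would read it with netCDF4 or xarray).
yy, xx = np.mgrid[0:128, 0:128]
field = np.sin(xx / 10.0) * np.cos(yy / 13.0)

# Normalise to 0..255 and write a heightmap image for Blender to displace with.
img8 = np.uint8(255 * (field - field.min()) / (field.max() - field.min()))
with open("heightmap.png", "wb") as f:
    f.write(encode_gray_png(img8))
```

From there, everything downstream (lighting, camera paths, 360° rendering) happens inside Blender itself.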
How to cite: paronuzzi ticco, S., tinto primis, O., and Arsouze, T.: Using Blender for Earth Science’s visualization, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21494, https://doi.org/10.5194/egusphere-egu2020-21494, 2020.
ITS4.3/AS5.2 – Machine learning for Earth System modelling
EGU2020-19339 | Displays | ITS4.3/AS5.2
Developing a data-driven ocean model
Rachel Furner, Peter Haynes, Dan Jones, Dave Munday, Brooks Paige, and Emily Shuckburgh
The recent boom in machine learning and data science has led to a number of new opportunities in the environmental sciences. In particular, climate models represent the best tools we have to predict, understand and potentially mitigate climate change; however, these process-based models are incredibly complex and require huge amounts of high-performance computing resources. Machine learning offers opportunities to greatly improve the computational efficiency of these models.
Here we discuss our recent efforts to reduce the computational cost associated with running a process-based model of the physical ocean by developing an analogous data-driven model. We train statistical and machine learning algorithms using the outputs from a highly idealised sector configuration of a general circulation model (MITgcm). Our aim is to develop an algorithm which is able to predict the future state of the general circulation model to a similar level of accuracy in a more computationally efficient manner.
We first develop a linear regression model to investigate the sensitivity of data-driven approaches to various inputs, e.g. temperature on different spatial and temporal scales, and meta-variables such as location information. Following this, we develop a neural network model to replicate the general circulation model, as in the work of Dueben and Bauer (2018) and Scher (2018).
We present a discussion on the sensitivity of data-driven models and preliminary results from the neural network based model.
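The linear-regression step can be sketched on a toy system; the 1-D advection-diffusion "model" below stands in for the MITgcm sector configuration, and the stencil choice is illustrative. A regression from a local 3-point stencil at time t to the value at t+1 recovers the numerical scheme exactly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "GCM": 1-D periodic advection-diffusion of a temperature-like field.
def step(u, c=0.4, nu=0.1):
    return u - c * (u - np.roll(u, 1)) + nu * (np.roll(u, 1) - 2*u + np.roll(u, -1))

n, T = 64, 400
u = np.sin(2 * np.pi * np.arange(n) / n) + 0.1 * rng.normal(size=n)
traj = [u]
for _ in range(T):
    u = step(u)
    traj.append(u)
traj = np.array(traj)

# Data-driven surrogate: linear regression from the stencil
# [u[i-1], u[i], u[i+1]] at time t to u[i] at time t+1.
X = np.stack([np.roll(traj[:-1], 1, axis=1),
              traj[:-1],
              np.roll(traj[:-1], -1, axis=1)], axis=-1).reshape(-1, 3)
y = traj[1:].reshape(-1)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
# The learned stencil weights should match the scheme's coefficients:
# [c + nu, 1 - c - 2*nu, nu] = [0.5, 0.4, 0.1].
```

A real GCM is of course nonlinear, which is why the abstract moves on to neural networks; the point here is only the input-sensitivity framing.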
Dueben, P. D., & Bauer, P. (2018). Challenges and design choices for global weather and climate models based on machine learning. Geoscientific Model Development, 11(10), 3999-4009.
Scher, S. (2018). Toward Data‐Driven Weather and Climate Forecasting: Approximating a Simple General Circulation Model With Deep Learning. Geophysical Research Letters, 45(22), 12-616.
How to cite: Furner, R., Haynes, P., Jones, D., Munday, D., Paige, B., and Shuckburgh, E.: Developing a data-driven ocean model, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19339, https://doi.org/10.5194/egusphere-egu2020-19339, 2020.
EGU2020-21754 | Displays | ITS4.3/AS5.2
Machine Learning of committor functions for predicting high impact climate events
Dario Lucente, Freddy Bouchet, and Corentin Herbert
There is growing interest in the climate community in improving the prediction of high impact climate events, for instance ENSO (El Niño–Southern Oscillation) or extreme events, using a combination of model and observation data. In this talk we present a machine learning approach for predicting the committor function, the relevant concept for this kind of prediction problem.
Because the dynamics of the climate system is chaotic, one usually distinguishes between time scales much shorter than a Lyapunov time, for which a deterministic weather forecast is relevant, and time scales much longer than a mixing time, beyond which any deterministic forecast is irrelevant and only climate-averaged or probabilistic quantities can be predicted. However, for most applications, the largest interest is in intermediate time scales for which some information, more precise than the climate averages, might be predicted, but for which a deterministic forecast is not relevant. We call this range of time scales the predictability margin. We stress in this talk that the prediction problem at the predictability margin is of a probabilistic nature. Indeed, such time scales might typically be of the order of the Lyapunov time scale or larger, where errors on the initial condition and model errors limit our ability to deterministically compute the evolution. In this talk we explain that, in a dynamical context, the relevant quantity for predicting a future event at the predictability margin is a committor function. A committor function is the probability that an event will occur in the future, as a function of the current state of the system.
We compute and discuss the committor function from data, either through a direct approach or through a machine learning approach using neural networks. We discuss two examples: a) the committor function for the Jin and Timmermann model, a low-dimensional model proposed to explain the decadal amplitude changes of El Niño; b) the committor function for extreme heat waves. We compare several machine learning approaches, using neural networks or kernel-based analogue methods.
From the point of view of climate extremes, our main conclusion is that one should generically distinguish between states with either intrinsic predictability or intrinsic unpredictability. This predictability concept is markedly different from the deterministic unpredictability arising from chaotic dynamics and exponential sensitivity to initial conditions.
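The "direct approach" to a committor can be sketched on a bistable toy system (an overdamped double well standing in for a bistable climate process; the dynamics, thresholds and sample sizes are illustrative, not the Jin-Timmermann setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def committor_mc(x0, eps=0.25, dt=0.01, n_traj=2000, max_steps=200_000):
    """Direct committor estimate for overdamped double-well dynamics
    dx = (x - x**3) dt + sqrt(2*eps) dW: the fraction of trajectories
    started at x0 that reach B = {x > 1} before A = {x < -1}."""
    x = np.full(n_traj, float(x0))
    hit_b = np.zeros(n_traj, dtype=bool)
    alive = np.ones(n_traj, dtype=bool)
    for _ in range(max_steps):
        if not alive.any():
            break
        xa = x[alive]
        x[alive] = xa + (xa - xa**3) * dt + np.sqrt(2 * eps * dt) * rng.normal(size=xa.size)
        hit_b |= alive & (x > 1)          # crossed into B while still running
        alive &= (x > -1) & (x < 1)       # stop trajectories that hit A or B
    return hit_b.mean()

q = [committor_mc(x0) for x0 in (-0.5, 0.0, 0.5)]
```

By symmetry the committor is 0.5 at the saddle and increases monotonically toward B; the learned (neural-network or analogue) approaches in the talk replace this brute-force sampling with a map from state to probability.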
How to cite: Lucente, D., Bouchet, F., and Herbert, C.: Machine Learning of committor functions for predicting high impact climate events, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21754, https://doi.org/10.5194/egusphere-egu2020-21754, 2020.
EGU2020-20207 | Displays | ITS4.3/AS5.2
Learned Criticality in “Supermodels” That Combine Competing Models of the Earth System With Adaptable Inter-Model Connections
Gregory Duane and Mao-Lin Shen
In a supermodel, different models of the same objective process exchange information at run time, effecting a form of inter-model data assimilation, with learned connections, that brings the models into partial synchrony and resolves differences. It has been shown [Chaos Focus Issue, Dec. ‘17] that supermodels can avoid errors of the separate models, even when all the models err qualitatively in the same way. They can thus surpass results obtained from any ex post facto averaging of model outputs.
Since climate models differ mainly in their schemes for parametrization of sub-grid-scale processes, one would expect supermodeling to be most useful when the small-scale processes have the largest effect on the dynamics of the entire model. According to the self-organized criticality conjecture of Bak ['87], inter-scale interactions are greatest near critical points of the system, characterized by a power-law form in the amplitude spectrum, and real-world systems naturally tend toward such critical points. Supermodels are therefore expected to be particularly useful near such states.
We validate this hypothesis first in a toy supermodel consisting of two quasigeostrophic channel models of the blocked/zonal flow vacillation, each model forced by relaxation to a jet flow pattern, but with different forcing strengths. One model, with low forcing, remains in a state of low-amplitude turbulence with no blocking. The other model, with high forcing, remains in the state defined by the forcing jet, again with no blocking. Yet a model with realistic forcing, and the supermodel formed from the two extreme models by training the connections, both exhibit blocking with the desired vacillation. The amplitude or energy spectrum of the supermodel exhibits the power-law dependence on wavenumber, characteristic of critical states, over a larger range of scales than does either of the individual models.
Then we turn to the more realistic case of a supermodel formed by coupling different ECHAM atmospheres to a common MPI ocean model. The atmosphere models differ only in their schemes for parametrizing small-scale convection. The weights on the energy and momentum fluxes from the two atmospheres, as they affect the ocean, are trained to form a supermodel. The separate models both exhibit the error of a double inter-tropical convergence zone (ITCZ), i.e. an extended cold tongue. But the trained supermodel (with positive weights) has the single ITCZ found in reality. The double ITCZ error in one model arises from a weak Bjerknes ocean-atmosphere feedback in the 2D tropical circulation. The double ITCZ in the other model arises from a more complex mechanism involving the 3D circulation pattern extending into the sub-tropics. The more correct supermodel behavior, and associated ENSO cycle, are reflected in an energy spectrum with power-law form, with a dynamic range and an exponent that are more like those of reality than are the corresponding quantities for the separate models, which are similar to each other. It thus appears that supermodels, in avoiding similar errors made by different constituent models for different reasons, are particularly useful both for emulating critical behavior and for capturing the correct properties of critical states.
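The weight-training idea can be sketched in miniature; a damped oscillator stands in for the coupled models, and all parameters and function names are illustrative (the real supermodel trains separate weights on energy and momentum fluxes, not a single scalar):

```python
import numpy as np

rng = np.random.default_rng(0)

# Truth and two imperfect "models": damped-oscillator tendencies whose
# stiffness is biased in opposite directions, so both err but differently.
def f_true(x):   return np.array([x[1], -1.0 * x[0] - 0.1 * x[1]])
def f_model1(x): return np.array([x[1], -1.4 * x[0] - 0.1 * x[1]])  # too stiff
def f_model2(x): return np.array([x[1], -0.5 * x[0] - 0.1 * x[1]])  # too loose

# Train a single inter-model weight w by least squares so that the blended
# tendency w*f1 + (1-w)*f2 best matches the truth on sampled states.
states = rng.normal(size=(200, 2))
F1 = np.array([f_model1(s) for s in states])
F2 = np.array([f_model2(s) for s in states])
FT = np.array([f_true(s) for s in states])
w = ((F1 - F2) * (FT - F2)).sum() / ((F1 - F2)**2).sum()
```

Here the optimal weight is (1.0 − 0.5)/(1.4 − 0.5) = 5/9, and the blended tendency reproduces the truth exactly; a positive interior weight is the toy analogue of the "positive weights" that yield the single ITCZ.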
How to cite: Duane, G. and Shen, M.-L.: Learned Criticality in “Supermodels” That Combine Competing Models of the Earth System With Adaptable Inter-Model Connections , EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20207, https://doi.org/10.5194/egusphere-egu2020-20207, 2020.
In a supermodel, different models of the same objective process exchange information in run-time, effecting a form of inter-model data assimilation, with learned connections, that brings the models into partial synchrony and resolves differences. It has been shown [Chaos Focus Issue, Dec. ‘17], that supermodels can avoid errors of the separate models, even when all the models err qualitatively in the same way. They can thus surpass results obtained from any ex post facto averaging of model outputs.
Since climate models differ largely in their schemes for parametrization of sub-grid-scale processes, one would expect supermodeling to be most useful when the small-scale processes have the largest effect on the dynamics of the entire model. According to the self-organized criticality conjecture of Bak [‘87] inter-scale interactions are greatest near critical points of the system, characterized by a power-law form in the amplitude spectrum, and real-world systems naturally tend toward such critical points. Supermodels are therefore expected to be particularly useful near such states.
We validate this hypothesis first in a toy supermodel consisting of two quasigeostrophic channel models of the blocked/zonal flow vacillation, each model forced by relaxation to a jet flow pattern, but with different forcing strengths. One model, with low forcing, remains in a state of low-amplitude turbulence with no blocking. The other model, with high forcing, remains in the state defined by the forcing jet, again with no blocking. Yet a model with realistic forcing, and the supermodel formed from the two extreme models by training the connections, exhibit blocking with the desired vacillation. The amplitude or energy spectrum of the supermodel exhibits the power-law dependence on wavenumber, characteristic of critical states, over a larger range of scales than does either of the individual models.
Then we turn to the more realistic case of a supermodel formed by coupling different ECHAM atmospheres to a common MPI ocean model. The atmosphere models differ only in their schemes for parametrizing small-scale convection. The weights on the energy and momentum fluxes from the two atmospheres, as they affect the ocean, are trained to form a supermodel. The separate models both exhibit the error of a double inter-tropical convergence zone (ITCZ), i.e. an extended cold tongue. But the trained supermodel (with positive weights) has the single ITCZ found in reality. The double ITCZ error in one model arises from a weak Bjerknes ocean-atmosphere feedback in the 2D tropical circulation. The double ITCZ in the other model arises from a more complex mechanism involving the 3D circulation pattern extending into the sub-tropics. The more correct supermodel behavior, and associated ENSO cycle, are reflected in an energy spectrum with power-law form with a dynamic range and an exponent that are more like those of reality than are the corresponding quantities for the separate models, which are similar to each other. It thus appears that supermodels, in avoiding similar errors made by different constituent models for different reasons, are particularly useful both for emulating critical behavior, and for capturing the correct properties of critical states.
How to cite: Duane, G. and Shen, M.-L.: Learned Criticality in “Supermodels” That Combine Competing Models of the Earth System With Adaptable Inter-Model Connections , EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20207, https://doi.org/10.5194/egusphere-egu2020-20207, 2020.
EGU2020-20845 | Displays | ITS4.3/AS5.2
Learning Lyapunov stable Dynamical Embeddings of Geophysical Dynamics
Said Ouala, Lucas Drumetz, Bertrand Chapron, Ananda Pascual, Fabrice Collard, Lucile Gaultier, and Ronan Fablet
Within the geosciences community, data-driven techniques have met with great success in the last few years, principally owing to the success of machine learning techniques in several image and signal processing domains. However, the data-driven simulation of ocean and atmospheric fields remains an extremely challenging task, because the underlying dynamics usually depend on several complex hidden variables, which makes learning and simulation much harder.
In this work, we aim to extract Ordinary Differential Equations (ODEs) from partial observations of a system. We propose a novel neural network architecture guided by physical and mathematical considerations of the underlying dynamics. Specifically, our architecture is able to simulate the dynamics of the system from a single initial condition, even if that initial condition does not lie in the attractor spanned by the training data. We show on different case studies the effectiveness of the proposed framework, both in capturing long-term asymptotic patterns of the dynamics and in addressing data assimilation issues, which relate to the short-term forecasting performance of our model.
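The underlying task, identifying governing equations from trajectory data, can be illustrated in a drastically simplified linear setting (the actual work uses a neural architecture and handles hidden variables and off-attractor initial conditions; none of that is reproduced here): estimate `A` in dx/dt = A x from a sampled trajectory via finite differences and least squares.

```python
import numpy as np

# Toy ODE identification: recover A in dx/dt = A x from trajectory samples.
A_true = np.array([[0.0, 1.0], [-1.0, -0.1]])   # damped oscillator (hypothetical)
dt, steps = 0.01, 2000
X = np.zeros((steps, 2))
X[0] = [1.0, 0.0]
for k in range(steps - 1):                      # forward-Euler data generator
    X[k + 1] = X[k] + dt * (A_true @ X[k])

dXdt = (X[1:] - X[:-1]) / dt                    # finite-difference derivatives
B, *_ = np.linalg.lstsq(X[:-1], dXdt, rcond=None)
A_est = B.T                                     # least-squares estimate of A
print(np.max(np.abs(A_est - A_true)))
```

Because the data generator and the finite differences are consistent here, the linear operator is recovered essentially exactly; with noisy or partial observations, the neural approach described in the abstract replaces this least-squares step.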
How to cite: Ouala, S., Drumetz, L., Chapron, B., Pascual, A., Collard, F., Gaultier, L., and Fablet, R.: Learning Lyapunov stable Dynamical Embeddings of Geophysical Dynamics, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20845, https://doi.org/10.5194/egusphere-egu2020-20845, 2020.
EGU2020-7569 | Displays | ITS4.3/AS5.2
Boosting performance in Machine Learning of Turbulent and Geophysical Flows via scale separation
Davide Faranda, Mathieu Vrac, Pascal Yiou, Flavio Maria Emanuele Pons, Adnane Hamid, Giulia Carella, Cedric Gacial Ngoungue Langue, Soulivanh Thao, and Valerie Gautard
Recent advances in statistical learning have opened the possibility of forecasting the behavior of chaotic systems using recurrent neural networks. In this work we investigate the applicability of this framework to geophysical flows, known to be intermittent and turbulent. We show that both turbulence and intermittency introduce severe limitations on the applicability of recurrent neural networks, both for short-term forecasts and for the reconstruction of the underlying attractor. We test these ideas on global sea-level pressure data for the past 40 years, taken from the NCEP reanalysis dataset, as a proxy of the atmospheric circulation dynamics. The performance of recurrent neural networks in predicting both short- and long-term behaviors drops rapidly when the systems are perturbed with noise. However, we found that good predictability is partially recovered when scale separation is performed via a moving-average filter. We suggest that possible strategies to overcome these limitations should be based on separating the smooth large-scale dynamics from the intermittent/turbulent features.
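The scale-separation step can be sketched in a few lines (a synthetic signal, not the sea-level pressure data of the abstract): a moving-average filter splits a series into a smooth large-scale component, suitable for the network, and an intermittent small-scale residual.

```python
import numpy as np

def moving_average(x, window):
    """Simple boxcar moving-average filter."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

t = np.linspace(0, 10, 2000)
rng = np.random.default_rng(0)
slow = np.sin(2 * np.pi * 0.2 * t)                 # large-scale dynamics
signal = slow + 0.5 * rng.standard_normal(t.size)  # plus small-scale "turbulence"

smooth = moving_average(signal, window=101)        # large-scale component
residual = signal - smooth                         # small-scale component

# The filtered series tracks the slow dynamics far better than the raw one.
err_raw = np.mean((signal - slow) ** 2)
err_smooth = np.mean((smooth - slow) ** 2)
print(err_raw, err_smooth)
```

The window length (here 101 samples, a hypothetical choice) sets the cut between "smooth" and "intermittent" scales and would need tuning for real geophysical series.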
How to cite: Faranda, D., Vrac, M., Yiou, P., Pons, F. M. E., Hamid, A., Carella, G., Ngoungue Langue, C. G., Thao, S., and Gautard, V.: Boosting performance in Machine Learning of Turbulent and Geophysical Flows via scale separation, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7569, https://doi.org/10.5194/egusphere-egu2020-7569, 2020.
EGU2020-13982 | Displays | ITS4.3/AS5.2
Deep Learning based cloud parametrization for the Community Atmosphere Model
Gunnar Behrens, Veronika Eyring, Pierre Gentine, Mike S. Pritchard, Tom Beucler, and Stephan Rasp
EGU2020-14055 | Displays | ITS4.3/AS5.2
Large-eddy simulation subgrid modelling using neural networks
Robin Stoffer, Caspar van Leeuwen, Damian Podareanu, Valeriu Codreanu, Menno Veerman, and Chiel van Heerwaarden
Large-eddy simulation (LES) is a widely used technique in the geosciences for simulating turbulent oceanic and atmospheric flows. In LES, the effects of the unresolved turbulence scales on the resolved scales (via the Reynolds stress tensor) have to be parameterized with subgrid models. These subgrid models usually require strong assumptions about the relationship between the resolved flow fields and the Reynolds stress tensor, which are often violated in reality and potentially hamper their accuracy.
In this study, using the finite-difference computational fluid dynamics code MicroHH (v2.0) and turbulent channel flow as a test case (friction Reynolds number Reτ = 590), we incorporated and tested a newly emerging subgrid modelling approach that does not require those assumptions. Instead, it relies on neural networks that are highly non-linear and flexible. Similar to currently used subgrid models, we designed our neural networks such that they can be applied locally in the grid domain: at each grid point the neural networks receive as an input the locally resolved flow fields (u,v,w), rather than the full flow fields. As an output, the neural networks give the Reynolds stress tensor at the considered grid point. This local application integrates well with our simulation code, and is necessary to run our code in parallel within distributed memory systems.
To allow our neural networks to learn the relationship between the specified input and output, we created a training dataset that contains ~10,000,000 samples of corresponding inputs and outputs. We derived those samples directly from high-resolution 3D direct numerical simulation (DNS) snapshots of turbulent flow fields. Since the DNS explicitly resolves all the relevant turbulence scales, by downsampling the DNS we were able to derive both the Reynolds stress tensor and the corresponding lower-resolution flow fields typical for LES. In this calculation, we took into account both the discretization and interpolation errors introduced by the finite staggered LES grid. Subsequently, using these samples we optimized the parameters of the neural networks to minimize the difference between the predicted and the ‘true’ output derived from DNS.
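The derivation of training pairs by downsampling DNS can be made concrete with a 1D toy sketch (the real pipeline is 3D, uses a staggered grid, and accounts for discretization and interpolation errors, none of which is modelled here): box-filter a fine field to LES resolution and compute one subgrid-stress component as tau = filter(u·u) − filter(u)·filter(u).

```python
import numpy as np

def box_filter(field, ratio):
    """Average non-overlapping blocks of `ratio` fine cells into one coarse cell."""
    return field.reshape(-1, ratio).mean(axis=1)

rng = np.random.default_rng(1)
u = rng.standard_normal(64)     # stand-in for one DNS velocity component
ratio = 8                       # DNS-to-LES coarsening factor (hypothetical)

u_les = box_filter(u, ratio)                  # resolved field: network input
tau_uu = box_filter(u * u, ratio) - u_les**2  # subgrid stress: training target
print(tau_uu.shape)
```

For this diagonal component the target is the within-block variance, so it is non-negative by construction; the off-diagonal components of the Reynolds stress tensor follow the same filter(u·v) − filter(u)·filter(v) pattern.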
After that, we tested the performance of our neural networks in two different ways:
- A priori or offline testing, where we used a withheld part of the training dataset (10%) to test the capability of the neural networks to correctly predict the Reynolds stress tensor for data not used to optimize its parameters. We found that the neural networks were, in general, well able to predict the correct values.
- A posteriori or online testing, where we incorporated our neural networks directly into our LES. To keep the total involved computational effort feasible, we strongly enhanced the prediction speed of the neural network by relying on highly optimized matrix-vector libraries. The full successful integration of the neural networks within LES remains challenging though, mainly because the neural networks tend to introduce numerical instability into the LES. We are currently investigating ways to minimize this instability, while maintaining the high accuracy in the a priori test and the high prediction speed.
How to cite: Stoffer, R., van Leeuwen, C., Podareanu, D., Codreanu, V., Veerman, M., and van Heerwaarden, C.: Large-eddy simulation subgrid modelling using neural networks, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-14055, https://doi.org/10.5194/egusphere-egu2020-14055, 2020.
EGU2020-9820 | Displays | ITS4.3/AS5.2 | Highlight
Machine learning for detection of climate extremes: New approaches to uncertainty quantification
William Collins, Travis O'Brien, Mr Prabhat, and Karthik Kashinath
- In many cases, the official definitions for the weather events in the current climate are ad hoc and/or subjective, leading to considerable variance in the statistics of these events, even in literature concerning the historical record;
- Operational methods for identifying these events are also typically quite ad hoc with very limited quantification of their structural and parametric uncertainties; and
- Both the generative mechanisms and physical properties of these events are predicted to evolve due to well-understood physics, and hence the training data set should, but typically does not, reflect these secular trends in the formation and statistical properties of climate extremes.
- The recent creation of the first labeled data set specifically designed for algorithm training on atmospheric extremes, known as ClimateNet;
- Probabilistic ML algorithms that identify events based on the level of agreement across an ensemble of operational methods;
- Bayesian methods that identify events based on the level of agreement across an ensemble of human expert-generated labels; and
- The prospects for physics-based detection using fundamental properties of the fluid dynamics (i.e., conserved variables and Lyapunov exponents) and/or information-theoretic concepts.
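The ensemble-agreement idea in the probabilistic approaches above can be sketched in a few lines (hypothetical binary masks on a 1D "field"; not the ClimateNet pipeline or any operational detector): stack the event masks produced by several methods and use the agreement fraction as an event probability.

```python
import numpy as np

# Each row: one detection method's binary event mask over six grid cells.
masks = np.array([
    [0, 1, 1, 1, 0, 0],   # method A (all masks hypothetical)
    [0, 1, 1, 0, 0, 0],   # method B
    [0, 0, 1, 1, 0, 1],   # method C
    [0, 1, 1, 1, 0, 0],   # method D
])

probability = masks.mean(axis=0)   # fraction of methods flagging each cell
confident = probability >= 0.75    # cells with strong cross-method agreement
print(probability.tolist())
print(confident.tolist())
```

The same construction applies when the rows are expert-generated labels instead of operational methods, which is the Bayesian variant mentioned above.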
How to cite: Collins, W., O'Brien, T., Prabhat, M., and Kashinath, K.: Machine learning for detection of climate extremes: New approaches to uncertainty quantification, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9820, https://doi.org/10.5194/egusphere-egu2020-9820, 2020.
EGU2020-10883 | Displays | ITS4.3/AS5.2
Assessment of Predictive Uncertainty of Data-Driven Environmental Models
Benedikt Knüsel, Christoph Baumberger, Marius Zumwald, David N. Bresch, and Reto Knutti
Due to ever larger volumes of environmental data, environmental scientists can increasingly use machine learning to construct data-driven models of phenomena. Data-driven environmental models can provide useful information to society, but this requires that their uncertainties be understood. However, new conceptual tools are needed for this because existing approaches to assess the uncertainty of environmental models do so in terms of specific locations, such as model structure and parameter values. These locations are not informative for an assessment of the predictive uncertainty of data-driven models. Rather than the model structure or model parameters, we argue that it is the behavior of a data-driven model that should be subject to an assessment of uncertainty.
In this paper, we present a novel framework that can be used to assess the uncertainty of data-driven environmental models. The framework uses argument analysis and focuses on epistemic uncertainty, i.e., uncertainty that is related to a lack of knowledge. It proceeds in three steps. The first step consists in reconstructing the justification of the assumption that the model used is fit for the predictive task at hand. Arguments for this justification may, for example, refer to sensitivity analyses and model performance on a validation dataset. In a second step, this justification is evaluated to identify how conclusively the fitness-for-purpose assumption is justified. In a third step, the epistemic uncertainty is assessed based on the evaluation of the arguments. Epistemic uncertainty emerges due to insufficient justification of the fitness-for-purpose assumption, i.e., if the model is less-than-maximally fit-for-purpose. This lack of justification translates to predictive uncertainty, or first-order uncertainty. Uncertainty also emerges if it is unclear how well the fitness-for-purpose assumption is justified. We refer to this uncertainty as “second-order uncertainty”. In other words, second-order uncertainty is uncertainty that researchers face when assessing first-order uncertainty.
We illustrate how the framework is applied by discussing a case study from environmental science in which data-driven models are used to make long-term projections of soil selenium concentrations. We highlight that in many applications, the lack of system understanding and the lack of transparency of machine learning can introduce a substantial level of second-order uncertainty. We close by sketching how the framework can inform uncertainty quantification.
How to cite: Knüsel, B., Baumberger, C., Zumwald, M., Bresch, D. N., and Knutti, R.: Assessment of Predictive Uncertainty of Data-Driven Environmental Models, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10883, https://doi.org/10.5194/egusphere-egu2020-10883, 2020.
EGU2020-9038 | Displays | ITS4.3/AS5.2
How many proxy are necessary to reconstruct the temperature of the last millennium?
Fernando Jaume-Santero, David Barriopedro, Ricardo García-Herrera, Sancho Salcedo-Sanz, and Natalia Calvo
Decades of scientific fieldwork have provided extensive sets of paleoclimate records to reconstruct the climate of the past at local, regional, and global scales. Within this context, the paleoclimate community is continuously undertaking new measuring campaigns to obtain long and reliable proxies. However, as most paleoclimate archives are restricted to land regions of the Northern Hemisphere, increasing the number of proxy records to improve the skill of climate field reconstructions might not always be the best strategy.
By generating pseudo-proxies from several model ensembles at the locations matching the records of the PAGES-2k network, we show how biologically-inspired artificial intelligence can be coupled with reconstruction methods to find the set of representative locations that minimizes the bias in global temperature field reconstructions induced by the non-homogeneous distribution of proxy records.
Our results indicate that small sets of perfect pseudo-proxies situated over key locations of the PAGES-2k network can outperform the reconstruction skill obtained with all available records. They highlight the importance of high latitudes and major teleconnection areas to reconstruct temperature fields at annual timescales. However, long-term temperature variations are better reconstructed by records situated at lower latitudes. According to our experiments, a careful selection of proxy locations should be performed depending on the targeted time scale of the reconstructed field.
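The core idea, that a well-chosen small set of proxy locations can beat using all of them, can be sketched with synthetic data and a greedy forward selection (a much simpler stand-in for the biologically-inspired optimization of the abstract; all numbers are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 4 * np.pi, 200)
truth = np.sin(t)                                 # target "global" series
noise_std = np.array([0.1, 0.1, 0.1] + [2.0] * 7) # 3 informative proxies, 7 noisy
proxies = truth + noise_std[:, None] * rng.standard_normal((10, t.size))

def recon_error(subset):
    """MSE of reconstructing the target as the mean of the chosen proxies."""
    return np.mean((proxies[list(subset)].mean(axis=0) - truth) ** 2)

# Greedy forward selection: repeatedly add the proxy that most reduces error.
selected = []
for _ in range(3):
    best = min((i for i in range(10) if i not in selected),
               key=lambda i: recon_error(selected + [i]))
    selected.append(best)

print(sorted(selected), recon_error(selected), recon_error(range(10)))
```

The three informative proxies are found, and the three-proxy reconstruction outperforms the one using all ten records, mirroring the abstract's finding that small, well-placed sets can beat the full network.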
How to cite: Jaume-Santero, F., Barriopedro, D., García-Herrera, R., Salcedo-Sanz, S., and Calvo, N.: How many proxy are necessary to reconstruct the temperature of the last millennium?, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9038, https://doi.org/10.5194/egusphere-egu2020-9038, 2020.
EGU2020-21555 | Displays | ITS4.3/AS5.2
Image Inpainting for Missing Values in Observational Climate Datasets Using Partial Convolutions in a cuDNN
Christopher Kadow, David Hall, and Uwe Ulbrich
Nowadays climate change research relies on climate information of the past. Historic temperature observations form global gridded datasets like HadCRUT4, which is investigated e.g. in the IPCC reports. However, such record-combining datasets are sparse in the past, and even today they contain missing values. Here we show that machine learning technology can be applied to refill these missing climate values in observational datasets. We found that the technology of image inpainting using partial convolutions in a CUDA-accelerated deep neural network can be trained on large Earth system model experiments from the NOAA reanalysis (20CR) and the Coupled Model Intercomparison Project phase 5 (CMIP5). The derived deep neural networks are capable of independently refilling added missing values in these experiments. The analysis shows a very high degree of reconstruction, even when the networks trained on one dataset are cross-evaluated on the other. The network reconstruction evaluates better than other methods typically used in climate science. In the end we will show the newly reconstructed observational dataset HadCRUT4 and discuss further investigations.
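A single partial-convolution step, the building block named in the title, can be sketched as follows (a 1D toy with an averaging kernel, not the trained network): convolve only over observed values, renormalise by the fraction of valid values under the kernel, and update the mask so filled cells count as observed in the next layer.

```python
import numpy as np

def partial_conv_1d(x, mask, kernel):
    """One partial-convolution step over a masked 1D field."""
    k, half = len(kernel), len(kernel) // 2
    xp = np.pad(x * mask, half)           # zero out and pad missing values
    mp = np.pad(mask, half)
    out = np.zeros_like(x, dtype=float)
    new_mask = np.zeros_like(mask, dtype=float)
    for i in range(len(x)):
        valid = mp[i:i + k].sum()
        if valid > 0:
            # renormalise by k/valid so partially observed windows are unbiased
            out[i] = (kernel @ xp[i:i + k]) * (k / valid)
            new_mask[i] = 1.0             # cell is now "observed" downstream
    return out, new_mask

x = np.array([1.0, 2.0, 0.0, 4.0, 5.0])   # value at index 2 is missing
mask = np.array([1.0, 1.0, 0.0, 1.0, 1.0])
kernel = np.ones(3) / 3.0                  # simple averaging kernel
out, new_mask = partial_conv_1d(x, mask, kernel)
print(out.round(2).tolist(), new_mask.tolist())
```

The missing cell is filled from its observed neighbours (here the mean of 2 and 4), and the updated mask lets deeper layers treat it as valid, which is how stacked partial convolutions progressively inpaint large gaps.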
How to cite: Kadow, C., Hall, D., and Ulbrich, U.: Image Inpainting for Missing Values in Observational Climate Datasets Using Partial Convolutions in a cuDNN, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21555, https://doi.org/10.5194/egusphere-egu2020-21555, 2020.
EGU2020-5532 | Displays | ITS4.3/AS5.2 | Highlight
A Vision for providing Global Weather Forecasts at Point-scale
Timothy Hewson
This presentation will provide a vision, based around current initiatives, of how post-processing and machine learning could work in tandem to downscale the ensemble output of current-generation global models, to deliver probabilistic analyses and forecasts of multiple surface weather parameters, at point scale, worldwide. Skill gains would be achieved by adjusting for grid-scale and sub-grid biases. One particularly attractive feature of the vision is that observational data are not required for the sites we forecast for, although the more ‘big data’ we use worldwide, the better the forecasts will be overall.
The vision is based on four building blocks - or steps - for each parameter. The first step is a simple proof-of-concept, the second is supervised training, the third is hindcast activation and verification, and the fourth is real-time operational implementation. Here we will provide three examples, for three fundamental surface weather parameters - rainfall, 2m temperature and 100m wind - although the concepts apply to other parameters too. We stress that different approaches are needed for different parameters, primarily because what determines model bias depends on the parameter: for some, biases depend primarily on local weather type; for others, mainly on local topography.
For rainfall downscaling, work at ECMWF has already passed stage 4, with real-time worldwide probabilistic point rainfall forecasts up to day 10 introduced operationally in April 2019, using a decision-tree-based software suite called “ecPoint”, which uses non-local gridbox weather-type analogues. Further work to improve the algorithms is underway within the EU-funded MISTRAL project. For 2m temperature we have reached stage 2, and ecPoint-based downscaling will be used to progress this within the EU-funded HIGHLANDER project. The task of 100m wind downscaling requires a different approach, because local topographic forcing is very strong; this is being addressed under the umbrella of the German Waves-to-Weather programme, using U-net-type convolutional neural networks for which short-period high-resolution simulations provide the training data. This work has also reached stage 2.
For each parameter discussed we see the potential for substantial gains, for point locations, in forecast accuracy and reliability, relative to the raw output of an operational global model. As such we envisage a bright future where probabilistic forecasts for individual sites (and re-analyses) are much better than hitherto, and where the degree of improvement also greatly exceeds what we can reasonably expect in the next two decades or so from advances in global NWP.
This presentation will give a brief overview of downscaling for the 3 parameters, highlight why we believe heavily supervised approaches offer the greatest potential, illustrate also how they provide invaluable feedback for model developers, illustrate areas where more work is needed (such as cross-parameter consistency), and show what form output could take (e.g. point-relevant EPSgrams, as an adaptation of ECMWF’s most popular product).
Contributors to the above initiatives include: Fatima Pillosu (ECMWF, ecPoint); Estibaliz Gascon and Andrea Montani (ECMWF, MISTRAL); Michael Kern and Kevin Höhlein (Technische Universität München, Waves-to-Weather).
How to cite: Hewson, T.: A Vision for providing Global Weather Forecasts at Point-scale, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5532, https://doi.org/10.5194/egusphere-egu2020-5532, 2020.
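As a rough illustration of the non-local weather-type-analogue idea behind ecPoint (the real system uses far richer decision trees and predictors; every name, threshold and number below is hypothetical):

```python
import numpy as np

# Hypothetical mapping functions: each weather type stores the
# distribution of point/gridbox rainfall ratios observed for that
# type in past (forecast, observation) pairs, anywhere in the world.
RATIOS = {
    "convective": np.array([0.1, 0.4, 1.0, 2.5, 6.0]),  # heavy-tailed
    "stratiform": np.array([0.6, 0.8, 1.0, 1.2, 1.5]),  # near-uniform
}

def weather_type(cape, gridbox_rain):
    """Toy decision tree: high CAPE with rain -> convective type."""
    return "convective" if cape > 500.0 and gridbox_rain > 0.0 else "stratiform"

def point_rain_percentiles(cape, gridbox_rain):
    """Downscale one gridbox rainfall value to point-scale percentiles
    by applying the ratio distribution of its diagnosed weather type.
    No local observations are needed at the forecast site."""
    return gridbox_rain * RATIOS[weather_type(cape, gridbox_rain)]

p = point_rain_percentiles(cape=800.0, gridbox_rain=4.0)  # convective case
```

The key property the abstract highlights falls out of this structure: because corrections are keyed to weather type rather than to the site, any location on the globe can be forecast for.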
EGU2020-3485 | Displays | ITS4.3/AS5.2
Extended Range Arctic Sea Ice Forecast with Convolutional Long-Short Term Memory NetworksYang Liu, Laurens Bogaardt, Jisk Attema, and Wilco Hazeleger
Operational Arctic sea ice forecasts are of crucial importance to commercial and scientific activities in the Arctic region. Currently, numerical climate models, including General Circulation Models (GCMs) and regional climate models, are widely used to generate Arctic sea ice predictions at weather time scales. However, these numerical models require near-real-time input of weather conditions to assure the quality of the predictions; such input is hard to obtain, and the simulations are computationally expensive. In this study, we propose a deep learning approach to forecasting sea ice in the Barents Sea at weather time scales. Convolutional Long Short-Term Memory networks (ConvLSTM) are well suited to such spatio-temporal sequence problems: they are LSTM (Long Short-Term Memory) networks with convolutional operations embedded in the LSTM cells. The approach is unsupervised and can make use of enormous amounts of historical weather and climate records. With input fields from atmospheric (ERA-Interim) and oceanic (ORAS4) reanalysis datasets, we demonstrate that the ConvLSTM is able to learn the variability of Arctic sea ice within the historical record and effectively predict regional sea ice concentration patterns at weekly to monthly time scales. Based on the known sources of predictability, sensitivity tests with different climate fields were also performed, and the influence of the different predictors on prediction quality was evaluated. The method outperforms predictions based on climatology and persistence, and is a promising candidate for a fast and cost-efficient operational sea ice forecast system.
How to cite: Liu, Y., Bogaardt, L., Attema, J., and Hazeleger, W.: Extended Range Arctic Sea Ice Forecast with Convolutional Long-Short Term Memory Networks, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3485, https://doi.org/10.5194/egusphere-egu2020-3485, 2020.
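A ConvLSTM step is the ordinary LSTM gate algebra with convolutions in place of the dense matrix products. A single-channel NumPy sketch, with small random kernels standing in for trained weights (the real networks use many channels and learned parameters):

```python
import numpy as np

def conv_same(x, k):
    """2-D 'same' convolution (single channel) with zero padding."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.empty_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = (xp[i:i + kh, j:j + kw] * k).sum()
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def convlstm_step(x, h, c, W):
    """One ConvLSTM step. W maps gate name -> (kernel_x, kernel_h, bias)."""
    pre = {}
    for name in ("i", "f", "o", "g"):
        kx, khid, b = W[name]
        pre[name] = conv_same(x, kx) + conv_same(h, khid) + b
    i, f, o = sigmoid(pre["i"]), sigmoid(pre["f"]), sigmoid(pre["o"])
    g = np.tanh(pre["g"])
    c_new = f * c + i * g            # convolutional cell-state update
    h_new = o * np.tanh(c_new)       # hidden state = gated cell state
    return h_new, c_new

rng = np.random.default_rng(0)
W = {name: (rng.normal(size=(3, 3)) * 0.1,
            rng.normal(size=(3, 3)) * 0.1, 0.0) for name in "ifog"}
x = rng.normal(size=(8, 8))          # e.g. one sea-ice concentration frame
h = np.zeros((8, 8)); c = np.zeros((8, 8))
for _ in range(3):                    # roll the cell over a short sequence
    h, c = convlstm_step(x, h, c, W)
```

Because the gates are convolutions, the hidden and cell states stay two-dimensional fields, so spatial patterns of sea ice concentration are carried through time rather than flattened away.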
EGU2020-17748 | Displays | ITS4.3/AS5.2
Deep learning for short-term temperature forecasts with video prediction methodsBing Gong, Severin Hußmann, Amirpasha Mozaffari, Jan Vogelsang, and Martin Schultz
This study explores the adaptation of state-of-the-art deep learning architectures for video frame prediction in the context of weather and climate applications. A proof-of-concept case study was performed to predict surface temperature fields over Europe for up to 20 hours based on ERA5 reanalysis data. Initial results have been achieved with a PredNet and a GAN-based architecture, using various combinations of temperature, surface pressure, and 500 hPa geopotential as inputs. The results show that the GAN-based architecture outperforms the PredNet. To facilitate the massive data processing and the testing of various deep learning architectures, we have developed a containerized parallel workflow for the full life-cycle of the application, consisting of data extraction, data pre-processing, training, post-processing and visualisation of results. The training of PredNet was parallelized on the JUWELS supercomputer at JSC, and its scaling performance was evaluated.
How to cite: Gong, B., Hußmann, S., Mozaffari, A., Vogelsang, J., and Schultz, M.: Deep learning for short-term temperature forecasts with video prediction methods, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17748, https://doi.org/10.5194/egusphere-egu2020-17748, 2020.
EGU2020-1635 | Displays | ITS4.3/AS5.2
Estimation of NO2 and SO2 concentration changes in Europe from meteorological data with Neural NetworkAndrey Vlasenko, Volker Mattias, and Ulrich Callies
Chemical substances of anthropogenic and natural origin released into the atmosphere affect air quality and, as a consequence, the health of the population. There is therefore a demand for reliable air quality simulations and for future scenarios investigating the effects of emission reduction measures. Due to high computational costs, predicting concentrations of chemical substances with discretized atmospheric chemistry transport models (CTMs) remains a great challenge. An alternative to these cumbersome numerical estimates is a computationally efficient neural network (NN). A NN is much simpler in design than a CTM, yet it can approximate any bounded continuous function (i.e., concentration time series) with the desired accuracy. In particular, a NN trained on a set of CTM estimates can reproduce those estimates up to the approximation error. We test the ability of a NN to reproduce CTM concentration estimates using the example of daily mean summer NO2 and SO2 concentrations. The measures of success in these tests are the difference in computational resources consumed and the difference between NN and CTM concentration estimates. Relying on the fact that, after spin-up, CTM estimates are independent of the initial concentrations, we show that a recurrent NN can likewise spin up and predict the atmospheric chemical state without any input concentration data. Moreover, we show that if the emission scenario does not change significantly from year to year, the NN can predict daily mean concentrations from meteorological data alone.
How to cite: Vlasenko, A., Mattias, V., and Callies, U.: Estimation of NO2 and SO2 concentration changes in Europe from meteorological data with Neural Network, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1635, https://doi.org/10.5194/egusphere-egu2020-1635, 2020.
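The spin-up argument - that a damped recurrence forgets its initial state, so no input concentrations are needed - can be illustrated with a toy contracting recurrence. This is a stand-in for the trained recurrent NN; the form of `step` and the value of `alpha` are assumptions for illustration only:

```python
import numpy as np

def step(conc, met, alpha=0.5):
    """Toy recurrent emulator: tomorrow's concentration is a damped
    carry-over of today's plus a meteorology-driven source term.
    Because alpha < 1 the map is contracting, so any influence of the
    initial concentration decays geometrically."""
    return alpha * conc + (1.0 - alpha) * np.tanh(met)

rng = np.random.default_rng(42)
met = rng.normal(size=60)            # 60 days of a meteorological driver

def run(conc0):
    c = conc0
    for m in met:                     # spin up over the record
        c = step(c, m)
    return c

a, b = run(0.0), run(100.0)           # wildly different initial states
```

After 60 steps the two trajectories are numerically indistinguishable, which is the property that lets the network predict concentrations from meteorological data only.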
EGU2020-5574 | Displays | ITS4.3/AS5.2
Predicting atmospheric optical properties for radiative transfer computations using neural networksMenno Veerman, Robert Pincus, Caspar van Leeuwen, Damian Podareanu, Robin Stoffer, and Chiel van Heerwaarden
A fast and accurate treatment of radiation in meteorological models is essential for high-quality simulations of the atmosphere. Despite our good understanding of the processes governing the transfer of radiation, full radiative transfer solvers are computationally extremely expensive. In this study, we use machine learning to accelerate the optical properties calculations of the Rapid Radiative Transfer Model for General circulation model applications - Parallel (RRTMGP). These optical properties control the absorption, scattering and emission of radiation within each grid cell. We train multiple neural networks that take as input the pressure, temperature and the concentrations of water vapour and ozone of each grid cell, and together predict all 224 or 256 quadrature points of each optical property. All networks are multilayer perceptrons, and we test various network sizes to assess the trade-off between the accuracy of a neural network and its computational cost. We train two different sets of neural networks. The first set (generic) is trained for a wide range of atmospheric conditions, based on the profiles chosen by the Radiative Forcing Model Intercomparison Project (RFMIP). The second set (case-specific) is trained only for the range of temperature, pressure and moisture found in one large-eddy simulation of a case with shallow convection over a vegetated surface. This case-specific set is used to explore the possible performance gains of case-specific tuning.
Most neural networks are able to predict the optical properties with high accuracy. Using a network with 2 hidden layers of 64 neurons, predicted optical depths in the longwave spectrum are highly accurate (R2 > 0.99). Similar accuracies are achieved for the other optical properties. Subsequently, we take a set of 100 atmospheric profiles and calculate profiles of longwave and shortwave radiative fluxes based on the optical properties predicted by the neural networks. Compared to fluxes based on the optical properties computed by RRTMGP, the downwelling longwave fluxes have errors within 0.5 W m-2 (<1%), with an average error of -0.011 W m-2 at the surface; the downwelling shortwave fluxes have an average error of -0.0013 W m-2 at the surface. Using the BLAS routines of Intel’s Math Kernel Library (MKL) to accelerate the matrix multiplications, our implementation of the neural networks in RRTMGP is about 4 times faster than the original optical properties calculations. We thus conclude that neural networks can emulate the calculation of optical properties with high accuracy and computational speed.
How to cite: Veerman, M., Pincus, R., van Leeuwen, C., Podareanu, D., Stoffer, R., and van Heerwaarden, C.: Predicting atmospheric optical properties for radiative transfer computations using neural networks, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5574, https://doi.org/10.5194/egusphere-egu2020-5574, 2020.
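A NumPy sketch of the kind of emulator described: a multilayer perceptron with two hidden layers of 64 neurons mapping four grid-cell inputs to 256 quadrature-point optical depths. The weights are random stand-ins, and the softplus output layer (to keep optical depths non-negative) is our choice, not necessarily the authors':

```python
import numpy as np

rng = np.random.default_rng(0)

def init(n_in, n_out):
    """He-style initialisation for a dense layer."""
    return (rng.normal(scale=np.sqrt(2.0 / n_in), size=(n_in, n_out)),
            np.zeros(n_out))

# 4 inputs per grid cell (pressure, temperature, H2O, O3) ->
# two hidden layers of 64 -> 256 quadrature-point optical depths
W1, b1 = init(4, 64)
W2, b2 = init(64, 64)
W3, b3 = init(64, 256)

def predict_optical_depth(cells):
    """Forward pass of the emulator for a batch of grid cells;
    each matrix product maps straight onto a BLAS GEMM call,
    which is where the ~4x speed-up comes from."""
    h = np.maximum(cells @ W1 + b1, 0.0)      # ReLU
    h = np.maximum(h @ W2 + b2, 0.0)
    z = h @ W3 + b3
    return np.log1p(np.exp(z))                # softplus keeps tau >= 0

cells = rng.normal(size=(100, 4))  # 100 grid cells, standardised inputs
tau = predict_optical_depth(cells)
```

Batching all grid cells into one matrix is what makes the emulator amenable to MKL's BLAS routines, as the abstract notes.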
EGU2020-13215 | Displays | ITS4.3/AS5.2
Bare-earth DEM Generation in Urban Areas Based on a Machine Learning MethodYinxue Liu, Paul Bates, Jeffery Neal, and Dai Yamazaki
Precise representation of global terrain is of great significance for estimating global flood risk. As the areas most vulnerable to flooding, urban areas need GDEMs of high quality. However, current Global Digital Elevation Models (GDEMs) are effectively Digital Surface Models (DSMs) in urban areas, which causes substantial blockages of flow pathways within flood inundation models. Taking GPS and LiDAR data as terrain observations, the errors of popular GDEMs (the SRTM 1” void-filled DEM - SRTM; the Multi-Error-Removed Improved-Terrain DEM - MERIT; and the TanDEM-X 3” resolution DEM - TDM3) were analysed in seven cities of varied types. The RMSE of the GDEM errors was found to lie in the range 2.3 m – 7.9 m, and MERIT and TDM3 both outperformed SRTM. The error comparison between MERIT and TDM3 showed that the most accurate model varied among the studied cities. Generally, the error of TDM3 is slightly lower than that of MERIT, but TDM3 has more extreme errors (absolute value exceeding 15 m). For cities that have experienced rapid development in the past decade, the RMSE of MERIT is lower than that of TDM3, mainly because of the difference in acquisition time between the two models. A machine learning method was adopted to estimate the MERIT error. Night-time light, world population density, OpenStreetMap building data, slope, elevation and neighbourhood elevation values from widely available datasets, comprising 14 factors in total, were used in the regression. Models were trained on single cities and on combinations of cities, and then used to estimate the error in a target city. By this approach, the RMSE of the corrected MERIT declines by up to 75% with a model trained on the target city; a less pronounced reduction of 35% – 68% was achieved with combined models that exclude the target city from the training data.
Further validation via flood simulation in a small city showed improvements in both flood extent and inundation depth for the corrected MERIT over the original MERIT. However, the corrected MERIT was not as good as TDM3 in this case. The method has the potential to generate a better bare-earth global DEM in urban areas, but its sensitivity under extrapolative application needs investigation at more study sites.
How to cite: Liu, Y., Bates, P., Neal, J., and Yamazaki, D.: Bare-earth DEM Generation in Urban Areas Based on a Machine Learning Method, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13215, https://doi.org/10.5194/egusphere-egu2020-13215, 2020.
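The error-regression idea can be sketched with synthetic data: predictors explain the DSM-versus-bare-earth error, a regression is fitted on training data and then applied to correct a target city. Plain least squares stands in here for the paper's machine-learning regressor, only 4 of the 14 factors are mimicked, and all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training data: four stand-in predictors (night-time light,
# population density, building fraction, slope) and the DEM-minus-truth
# error they explain (in metres).
n = 500
X = rng.uniform(size=(n, 4))
true_coef = np.array([3.0, 2.0, 4.0, -1.0])
err = X @ true_coef + rng.normal(scale=0.3, size=n)

# fit error model (least squares with an intercept column)
coef, *_ = np.linalg.lstsq(np.c_[X, np.ones(n)], err, rcond=None)

def correct_dem(dem, predictors):
    """Subtract the predicted surface-artefact error from the DEM."""
    predicted_err = np.c_[predictors, np.ones(len(predictors))] @ coef
    return dem - predicted_err

# target 'city': the corrected DEM should be much closer to bare earth
Xt = rng.uniform(size=(50, 4))
truth = rng.uniform(10.0, 20.0, size=50)
dem = truth + Xt @ true_coef               # DSM-like biased surface
rmse_before = np.sqrt(np.mean((dem - truth) ** 2))
rmse_after = np.sqrt(np.mean((correct_dem(dem, Xt) - truth) ** 2))
```

The train-on-other-cities / apply-to-target-city distinction in the abstract corresponds to fitting `coef` on data that does or does not include the target city's own predictor-error pairs.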
EGU2020-2135 | Displays | ITS4.3/AS5.2
GPP and NEE estimation for global forests based on a deep convolutional neural networkWenjin Wu
To generate FluxNet-consistent annual forest GPP and NEE, we have developed a deep neural network that can retrieve such estimates globally. Seven parameters covering different aspects of forest ecological and climatic conditions - the Normalized Difference Vegetation Index (NDVI), the Enhanced Vegetation Index (EVI), evapotranspiration (ET), land surface temperature during daytime (LSTD), land surface temperature at night (LSTN), precipitation, and forest type - were selected as input. All these datasets can be acquired from the Google Earth Engine platform to enable rapid large-scale analysis. The model has three favorable traits: (1) Based on a multidimensional convolutional block, it arranges all temporal variables into a two-dimensional feature map to capture phenology and inter-parameter relationships; the model can thus make its estimates from encoded, meaningful patterns instead of raw input variables. (2) In contrast to filling data gaps with historical values or smoothing methods, the model is trained to catch signals under a certain level of occlusion and can therefore tolerate a relatively large portion of missing data. (3) The model is data-driven yet interpretable, so it can potentially reveal unknown mechanisms of forest carbon absorption by showing how these mechanisms contribute to correct estimates. The model was compared with three traditional machine learning models and showed superior performance. With the new model, global forest GPP and NEE in 2003 and 2018 were obtained, and variations in the carbon flux over the 16 years in between were analysed.
How to cite: Wu, W.: GPP and NEE estimation for global forests based on a deep convolutional neural network, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2135, https://doi.org/10.5194/egusphere-egu2020-2135, 2020.
EGU2020-5440 | Displays | ITS4.3/AS5.2
Constraining uncertainty in projected gross primary production with machine learningManuel Schlund, Veronika Eyring, Gustau Camps-Valls, Pierre Friedlingstein, Pierre Gentine, and Markus Reichstein
By absorbing about one quarter of the total anthropogenic CO2 emissions, the terrestrial biosphere is an important carbon sink in Earth’s carbon cycle. A key metric of this process is the terrestrial gross primary production (GPP), which describes the biogeochemical production of energy by photosynthesis. Elevated atmospheric CO2 concentrations will increase GPP in the future (the CO2 fertilization effect). However, projections from different Earth system models participating in the Coupled Model Intercomparison Project Phase 5 (CMIP5) show a large spread in carbon-cycle-related quantities. In this study, we present a new supervised machine learning approach to constrain multi-model climate projections using observation-driven data. Our method, based on Gradient Boosted Regression Trees, handles multiple predictor variables of the present-day climate and accounts for non-linear dependencies. Applied to GPP in the representative concentration pathway RCP 8.5 at the end of the 21st century (2081–2100), the new approach reduces the “likely” range (as defined by the Intergovernmental Panel on Climate Change) of the CMIP5 multi-model projection of GPP to 161–203 GtC yr⁻¹. Compared to the unweighted multi-model mean (148–224 GtC yr⁻¹), this is an uncertainty reduction of 45%. Our new method is not limited to projections of the future carbon cycle, but can be applied to any target variable for which suitable gridded data are available.
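The emergent-constraint idea behind this approach can be illustrated with a deliberately simplified stand-in: ordinary least squares on one predictor instead of gradient-boosted trees on many, and invented toy numbers in place of CMIP5 output.

```python
# Hedged sketch: each "model" provides a present-day predictor value and a
# projected end-of-century GPP; a regression fit across models, evaluated
# at an observation-driven present-day value, yields a constrained estimate.
models = [  # (present-day predictor, projected GPP, GtC/yr) -- invented values
    (110.0, 150.0), (120.0, 170.0), (125.0, 180.0),
    (135.0, 200.0), (145.0, 220.0),
]
xs = [m[0] for m in models]
ys = [m[1] for m in models]
n = len(models)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

observed_present = 123.0  # hypothetical observation-driven estimate
constrained = intercept + slope * observed_present
print(round(constrained, 1))  # 176.0
```

The real method additionally quantifies the "likely" range rather than a point estimate, and lets the tree ensemble weigh many present-day climate fields at once.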
How to cite: Schlund, M., Eyring, V., Camps-Valls, G., Friedlingstein, P., Gentine, P., and Reichstein, M.: Constraining uncertainty in projected gross primary production with machine learning, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5440, https://doi.org/10.5194/egusphere-egu2020-5440, 2020.
EGU2020-2222 | Displays | ITS4.3/AS5.2 | Highlight
Coastal to Abyssal Vertical Sediment Accumulation Rates Predicted via Machine-Learning: Towards Sediment Characterization on a Global ScaleGiancarlo Restreppo, Warren Wood, and Benjamin Phrampus
Observed vertical sediment accumulation rates (SARs; n = 1166) were gathered from ~55 years of peer-reviewed literature. The original methods of rate calculation include long-term isotope geochronology (¹⁴C, ²¹⁰Pb, and ¹³⁷Cs), pollen analysis, horizon markers, and box coring. These observations were used to create a database of contemporary vertical SARs. Rates were converted to cm yr⁻¹, paired with each observation’s longitude and latitude, and placed into a machine-learning-based Geospatial Predictive Seafloor Model (GPSM). GPSM finds correlations between the data and established global “predictors” (quantities known or estimable everywhere, e.g. distance from the coastline, distance from river mouths, etc.). The result, using a k-nearest-neighbor (k-NN) algorithm, is a 5-arc-minute global map of predicted vertical SARs. The map provides a global reference for vertical sedimentation from coastal to abyssal depths. The areas of highest sedimentation, ~3-8 cm yr⁻¹, are generally coastal zones proximal to river mouths and continental shelves on passive tectonic margins (e.g. the Gulf of Mexico, the eastern United States, eastern continental Asia, and the Pacific Islands north of Australia), with rates falling exponentially towards the deepest parts of the oceans. Coastal zones on active tectonic margins display vertical sedimentation of ~1 cm yr⁻¹, which is limited to the nearshore when compared to passive margins. Abyssal-depth rates are functionally zero at the time scale examined (~10⁻⁴ cm yr⁻¹) and increase by one order of magnitude near the Mid-Atlantic Ridge and at the junction of the Pacific, Nazca, and Cocos tectonic plates. Predicted sedimentation patterns are then compared to established quantities of fluvial sediment discharge to the oceans, calculated by Milliman and Farnsworth in River Discharge to the Coastal Ocean: A Global Synthesis (2011).
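The k-NN prediction step can be sketched with invented predictor values and rates (GPSM's real predictor set and 5-arc-minute grid are far richer):

```python
# Minimal k-nearest-neighbor sketch of GPSM-style prediction: each
# observation carries predictor values (distance to coastline, distance to
# nearest river mouth, in km -- invented numbers) and an accumulation rate
# (cm/yr); a grid cell is predicted from the k closest observations in
# predictor space.
def knn_predict(obs, query, k=3):
    ranked = sorted(obs, key=lambda o: sum((a - b) ** 2 for a, b in zip(o[0], query)))
    return sum(rate for _, rate in ranked[:k]) / k

observations = [
    ((5.0, 10.0), 4.0),         # shelf near a river mouth: high SAR
    ((8.0, 15.0), 3.0),
    ((20.0, 60.0), 1.0),
    ((900.0, 1200.0), 0.0001),  # abyssal: functionally zero
]
pred = knn_predict(observations, (7.0, 12.0), k=3)
print(round(pred, 3))  # mean of the three nearest rates
```

Averaging the k nearest observed rates is what lets sparse point observations fill a global grid wherever the predictors are known.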
How to cite: Restreppo, G., Wood, W., and Phrampus, B.: Coastal to Abyssal Vertical Sediment Accumulation Rates Predicted via Machine-Learning: Towards Sediment Characterization on a Global Scale, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2222, https://doi.org/10.5194/egusphere-egu2020-2222, 2020.
EGU2020-12634 | Displays | ITS4.3/AS5.2
Real-time Japanese nearshore wave prediction for one-week later using GMDH and global wave forecast dataSooyoul Kim, Keishiro Chiyonobu, Hajime Mase, and Masahide Takeda
The present study addresses how nearshore wave heights and periods one week ahead can be predicted using a machine learning technique and global wave forecast data. For the machine learning technique, the Group Method of Data Handling (GMDH) is used. GMDH uses computer-based mathematical modeling of multi-parametric regression characterized by fully automatic structural and parametric optimization, first introduced by Ivakhnenko (1971). The GMDH algorithm can be described as a self-selecting procedure that derives a multi-order polynomial to predict an accurate output. Since this procedure is similar to a feed-forward transformation, the algorithm is also called a Polynomial Neural Network (Onwubolu, 2016).
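The polynomial units that GMDH self-selects can be illustrated with a single two-input unit fitted by least squares (a simplified sketch with invented data; a real GMDH network stacks layers of such units and keeps only the best performers):

```python
# Hedged sketch of one GMDH unit: a low-order polynomial in two inputs,
# fitted via the normal equations with Gaussian elimination. Here the
# targets come from a known polynomial, so the fit recovers its
# coefficients.
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_gmdh_unit(x1s, x2s, ys):
    feats = [[1.0, x1, x2, x1 * x2] for x1, x2 in zip(x1s, x2s)]
    k = 4
    A = [[sum(f[i] * f[j] for f in feats) for j in range(k)] for i in range(k)]
    b = [sum(f[i] * y for f, y in zip(feats, ys)) for i in range(k)]
    return solve(A, b)

x1s = [0.0, 1.0, 2.0, 3.0, 1.5, 2.5]
x2s = [1.0, 0.0, 1.0, 2.0, 2.5, 0.5]
ys = [2 + 0.5 * a - b + 0.25 * a * b for a, b in zip(x1s, x2s)]
coeffs = fit_gmdh_unit(x1s, x2s, ys)
print([round(c, 6) for c in coeffs])  # ~[2.0, 0.5, -1.0, 0.25]
```

In a full GMDH run, candidate units like this one are built for every input pair, scored on held-out data, and the survivors feed the next layer, which is the "self-selecting" behavior described above.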
For the global wave forecast data, the datasets released by the Japan Meteorological Agency (JMA), the National Oceanic and Atmospheric Administration (NOAA), and the European Centre for Medium-Range Weather Forecasts (ECMWF) are used. The global wave forecasts are generally available every 6 hours, with forecasts out to 180 hours into the future. However, since timely available forecasts are produced on synoptic-scale calculation domains, a consistent level of predictive accuracy at specific locations along the Japanese coast cannot be expected, given the limited spatial resolution.
The present study aims to aid harbor and marine construction by establishing a nearshore wave prediction model for 14 stations around Japan that forecasts up to one week into the future.
When the GMDH-based wave model used global wave data from NOAA and ECMWF as input, the estimated significant wave heights agreed well with observations. On the other hand, a combination of JMA and ECMWF wave data gave good performance for significant wave periods. Since the present method transforms global wave prediction data into local nearshore waves via GMDH, it can be applied at any location of interest where nearshore wave observations are available for training the GMDH.
How to cite: Kim, S., Chiyonobu, K., Mase, H., and Takeda, M.: Real-time Japanese nearshore wave prediction for one-week later using GMDH and global wave forecast data, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12634, https://doi.org/10.5194/egusphere-egu2020-12634, 2020.
EGU2020-19772 | Displays | ITS4.3/AS5.2
Wave data prediction and reconstruction by recurrent neural networks at the nearshore area of NorderneyChristoph Jörges, Cordula Berkenbrink, and Britta Stumpe
Sea level rise, a possible increase in the frequency and intensity of storms, and other effects of global warming exert pressure on the coastal regions of the North Sea. Storm surges also threaten the livelihoods of many people in the affected areas. For the design of coastal protection and offshore structures alike, detailed knowledge of the sea state, especially the wave height, is of particular interest. The nearshore wave climate at the island of Norderney has therefore been measured by buoys since the early 1990s. These buoys can be damaged by passing ships or weather impacts, which leads to a large amount of missing data in the wave time series that form the basis for numerical modelling, statistical analysis, and the development of coastal protection.
Artificial neural networks are nowadays a common method to reconstruct and forecast wave heights. This study presents a new technique to reconstruct and forecast the significant wave height measured by buoys in the nearshore area of the Norderney coastline. Buoy data from the period 2004 to 2017 from the NLWKN – Coastal Research Station at Norderney were used to train three different statistical and machine learning models: linear regression, a feed-forward neural network, and a long short-term memory (LSTM) network. An energy density spectrum was tested against calculated sea state parameters as input. The LSTM, a recurrent neural network, is the proposed algorithm to reconstruct wave height data. It is designed specifically for sequential data but was applied to wave spectral data for the first time in this study. Depending on the input parameters of the respective model, the LSTM can reconstruct and forecast time series of arbitrary length.
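For illustration, the gate recurrence inside a single LSTM cell, the building block of the model described above, can be sketched with scalar state and made-up weights (the study's actual architecture and trained weights are not described at this level of detail):

```python
# Hedged sketch of one LSTM step: forget, input, and output gates decide
# what the cell state keeps, adds, and exposes at each time step.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, w):
    f = sigmoid(w["f"][0] * x + w["f"][1] * h_prev + w["f"][2])   # forget gate
    i = sigmoid(w["i"][0] * x + w["i"][1] * h_prev + w["i"][2])   # input gate
    g = math.tanh(w["g"][0] * x + w["g"][1] * h_prev + w["g"][2]) # candidate
    o = sigmoid(w["o"][0] * x + w["o"][1] * h_prev + w["o"][2])   # output gate
    c = f * c_prev + i * g   # cell state carries long-term memory
    h = o * math.tanh(c)     # hidden state is the cell's output
    return h, c

weights = {k: (0.5, 0.3, 0.1) for k in ("f", "i", "g", "o")}
h = c = 0.0
for x in [0.2, 0.9, 0.4]:  # e.g. a short series of wave heights
    h, c = lstm_step(x, h, c, weights)
print(-1.0 < h < 1.0)  # hidden state is bounded by tanh
```

The cell state `c` is what lets the network carry information across long gaps, which is why this architecture suits reconstruction of time series with missing stretches.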
Using information on wind speed, wind direction, and water depth, as well as the wave heights of two neighboring buoy stations, the LSTM reconstructs the wave height with a correlation coefficient of 0.98 between measured and reconstructed data.
Unfortunately, extreme events are strongly underestimated in both forecasting and reconstruction, even though these events are of great interest for climate and ocean science. Work is currently under way to reduce this error specifically. Compared to numerical modeling, the machine learning approach requires less computational effort. The results of this study can be used to complete spatial and temporal wave height datasets, providing a better basis for trend analyses in relation to climate change and for validating numerical models used for decision making in coastal protection and management.
How to cite: Jörges, C., Berkenbrink, C., and Stumpe, B.: Wave data prediction and reconstruction by recurrent neural networks at the nearshore area of Norderney, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19772, https://doi.org/10.5194/egusphere-egu2020-19772, 2020.
EGU2020-3447 | Displays | ITS4.3/AS5.2
Design of a neural network aimed at predicting meteotsunamis in Ciutadella harbour (Balearic Islands, Spain)Maria-del-Mar Vich and Romualdo Romero
This work explores the applicability of neural networks (NNs) for forecasting atmospherically driven tsunamis affecting Ciutadella harbour in Menorca (Balearic Islands). These meteotsunamis can lead to wave heights of around 1 m, and several episodes in modern history have reached 2-4 m with catastrophic consequences. A timely and skilled prediction of these phenomena could significantly help to mitigate the damage inflicted on port facilities and moored vessels. We examine the relevant physical mechanisms that promote meteotsunamis in Ciutadella harbour and choose the input variables of the NN accordingly. Two different NNs are devised and tested: a dry and a wet scheme. The difference between the schemes resides in the input layer: while the first scheme focuses exclusively on the triggering role of atmospheric gravity waves (governed by temperature and wind profiles across the tropospheric column), the second scheme also incorporates humidity as input, to account for the occasional influence of moist convection. We train both NNs using the resilient backpropagation with weight backtracking method. Their performance is tested by means of classical deterministic verification indices. We also compare both NN results against the performance of a substantially different prognostic method that relies on a sequence of atmospheric and oceanic numerical simulations. Both NN schemes show skill comparable to that of computationally expensive approaches based on direct numerical simulation of the physical mechanisms. The expected greater versatility of the wet scheme over the dry scheme cannot be clearly proved owing to the limited size of the training database. The results emphasize the potential of the NN approach and open a clear path to an operational implementation, including probabilistic forecasting strategies.
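The training rule named above, resilient backpropagation with weight backtracking, can be sketched for a single weight (a simplified illustration on a toy quadratic objective, not the authors' implementation):

```python
# Hedged sketch of an Rprop+-style update: each weight keeps its own step
# size, grown while the gradient sign persists and shrunk (with the last
# step undone) when the sign flips.
def rprop_step(w, grad, prev_grad, step, prev_dw,
               eta_plus=1.2, eta_minus=0.5, step_max=50.0, step_min=1e-6):
    if grad * prev_grad > 0:        # same direction: accelerate
        step = min(step * eta_plus, step_max)
        dw = -step if grad > 0 else step
        w += dw
    elif grad * prev_grad < 0:      # sign flip: backtrack and slow down
        w -= prev_dw
        step = max(step * eta_minus, step_min)
        grad, dw = 0.0, 0.0         # suppress the next update's sign test
    else:
        dw = -step if grad > 0 else (step if grad < 0 else 0.0)
        w += dw
    return w, grad, step, dw

# Minimize f(w) = (w - 3)^2, whose gradient is 2(w - 3).
w, prev_grad, step, prev_dw = 0.0, 0.0, 0.1, 0.0
for _ in range(100):
    grad = 2.0 * (w - 3.0)
    w, prev_grad, step, prev_dw = rprop_step(w, grad, prev_grad, step, prev_dw)
print(abs(w - 3.0) < 0.01)  # True: converges near the minimum
```

Because only the gradient's sign is used, the rule is robust to the wildly varying gradient magnitudes typical of deep tropospheric-profile inputs.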
How to cite: Vich, M.-M. and Romero, R.: Design of a neural network aimed at predicting meteotsunamis in Ciutadella harbour (Balearic Islands, Spain), EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3447, https://doi.org/10.5194/egusphere-egu2020-3447, 2020.
EGU2020-15481 | Displays | ITS4.3/AS5.2
Deep learning for monthly Arctic sea ice concentration predictionTom Andersson, Fruzsina Agocs, Scott Hosking, María Pérez-Ortiz, Brooks Paige, Chris Russell, Andrew Elliott, Stephen Law, Jeremy Wilkinson, Yevgeny Askenov, David Schroeder, Will Tebbutt, Anita Faul, and Emily Shuckburgh
Over recent decades, the Arctic has warmed faster than any other region on Earth. The rapid decline in Arctic sea ice extent (SIE) is often highlighted as a key indicator of anthropogenic climate change. Changes in sea ice disrupt Arctic wildlife and indigenous communities, and influence weather patterns as far away as the mid-latitudes. Furthermore, melting sea ice attenuates the albedo effect by replacing white, reflective ice with dark, heat-absorbing melt ponds and open sea, increasing the Sun’s radiative heat input to the Arctic and amplifying global warming through a positive feedback loop. Thus, the reliable prediction of sea ice under a changing climate is of both regional and global importance. However, Arctic sea ice presents severe modelling challenges due to its complex coupled interactions with the ocean and atmosphere, leading to high levels of uncertainty in numerical sea ice forecasts.
Deep learning (a subset of machine learning) is a family of algorithms that use multiple nonlinear processing layers to extract increasingly high-level features from raw input data. Recent advances in deep learning techniques have enabled widespread success in diverse areas where significant volumes of data are available, such as image recognition, genetics, and online recommendation systems. Despite this success, and the presence of large climate datasets, applications of deep learning in climate science have been scarce until recent years. For example, few studies have posed the prediction of Arctic sea ice in a deep learning framework. We investigate the potential of a fully data-driven, neural network sea ice prediction system based on satellite observations of the Arctic. In particular, we use inputs of monthly-averaged sea ice concentration (SIC) maps since 1979 from the National Snow and Ice Data Center, as well as climatological variables (such as surface pressure and temperature) from the European Centre for Medium-Range Weather Forecasts reanalysis (ERA5) dataset. Past deep learning-based Arctic sea ice prediction systems tend to overestimate sea ice in recent years; we investigate the potential to learn the non-stationarity induced by climate change by including multi-decade global warming indicators (such as average Arctic air temperature). We train the networks to predict SIC maps one month into the future, evaluating network prediction uncertainty by ensembling independent networks with different random weight initialisations. Our model accounts for seasonal variations in the drivers of sea ice by controlling for the month of the year being predicted. We benchmark our prediction system against persistence, linear extrapolation and autoregressive models, as well as September minimum SIE predictions from submissions to the Sea Ice Prediction Network's Sea Ice Outlook.
Performance is evaluated quantitatively using the root mean square error and qualitatively by analysing maps of prediction error and uncertainty.
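The persistence and linear-extrapolation baselines, and the RMSE metric used for evaluation, can be sketched on an invented single-cell SIC series:

```python
# Hedged sketch of two benchmark baselines: persistence repeats the last
# observed value, linear extrapolation continues the most recent trend,
# and RMSE scores the predictions against a verifying observation.
def persistence(series):
    return series[-1]

def linear_extrapolation(series):
    return series[-1] + (series[-1] - series[-2])

def rmse(preds, truths):
    return (sum((p - t) ** 2 for p, t in zip(preds, truths)) / len(preds)) ** 0.5

sic = [0.95, 0.92, 0.88, 0.83]  # invented SIC values for consecutive months
truth_next = 0.77               # invented verifying observation
p1 = persistence(sic)           # 0.83
p2 = linear_extrapolation(sic)  # ~0.78
print(rmse([p1], [truth_next]) > rmse([p2], [truth_next]))  # True here
```

In a declining-ice regime, persistence lags the trend, which is why trend-following baselines (and, the abstract argues, networks fed warming indicators) can outperform it.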
How to cite: Andersson, T., Agocs, F., Hosking, S., Pérez-Ortiz, M., Paige, B., Russell, C., Elliott, A., Law, S., Wilkinson, J., Askenov, Y., Schroeder, D., Tebbutt, W., Faul, A., and Shuckburgh, E.: Deep learning for monthly Arctic sea ice concentration prediction, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-15481, https://doi.org/10.5194/egusphere-egu2020-15481, 2020.
EGU2020-10366 | Displays | ITS4.3/AS5.2
Deep Learning for Precipitation Estimation from Satellite and Rain Gauges MeasurementsArthur Moraux, Steven Dewitte, Bruno Cornelis, and Adrian Munteanu
In the coming years, Artificial Intelligence (AI), of which Deep Learning (DL) is an essential component, is expected to transform society in a way that has been compared to the introduction of electricity or of the internet. These high expectations are founded on the many impressive results of recent DL studies on AI tasks (e.g. computer vision, text translation, and image or text generation). Weather and climate observations also hold large potential for AI applications.
We present the results of the recent paper [Moraux et al, 2019], one of the first demonstrations of the application of cutting-edge deep learning techniques to a practical weather observation problem. We developed a multiscale encoder-decoder convolutional neural network using the three most relevant SEVIRI/MSG spectral images, at 8.7, 10.8, and 12.0 micron, together with in situ rain gauge measurements as input. The network is trained to reproduce precipitation measured by rain gauges in Belgium, the Netherlands, and Germany. Precipitating pixels are detected with a probability of detection (POD) of 0.75 and a false alarm ratio (FAR) of 0.3. The instantaneous precipitation rate is estimated with an RMSE of 1.6 mm/h.
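The POD and FAR scores quoted above follow from a standard contingency table of predicted versus observed precipitating pixels; a minimal sketch with invented counts:

```python
# Hedged sketch of the detection metrics (pixel counts are invented):
# hits = predicted and observed, misses = observed but not predicted,
# false_alarms = predicted but not observed.
def pod(hits, misses):
    """Probability of detection: fraction of observed events predicted."""
    return hits / (hits + misses)

def far(hits, false_alarms):
    """False alarm ratio: fraction of predicted events that did not occur."""
    return false_alarms / (hits + false_alarms)

hits, misses, false_alarms = 75, 25, 32
print(round(pod(hits, misses), 2))           # 0.75
print(round(far(hits, false_alarms), 3))     # ~0.299
```

POD and FAR trade off against each other via the detection threshold, so both are reported together.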
Reference:
[Moraux et al, 2019] Moraux, A.; Dewitte, S.; Cornelis, B.; Munteanu, A. Deep Learning for Precipitation Estimation from Satellite and Rain Gauges Measurements. Remote Sens. 2019, 11, 2463.
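For reference, the reported detection scores follow the standard contingency-table definitions. A minimal sketch on toy binary rain/no-rain masks (function and variable names are our own):

```python
import numpy as np

def detection_scores(pred_rain, obs_rain):
    """POD and FAR from binary rain masks.
    POD = hits / (hits + misses); FAR = false alarms / (hits + false alarms)."""
    hits = np.sum(pred_rain & obs_rain)
    misses = np.sum(~pred_rain & obs_rain)
    false_alarms = np.sum(pred_rain & ~obs_rain)
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    return pod, far

obs  = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=bool)
pred = np.array([1, 1, 1, 0, 1, 0, 0, 0], dtype=bool)
pod, far = detection_scores(pred, obs)
print(pod, far)  # 0.75 0.25
```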
How to cite: Moraux, A., Dewitte, S., Cornelis, B., and Munteanu, A.: Deep Learning for Precipitation Estimation from Satellite and Rain Gauges Measurements, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10366, https://doi.org/10.5194/egusphere-egu2020-10366, 2020.
EGU2020-4824 | Displays | ITS4.3/AS5.2
Necessary conditions for algorithmic tuning of weather prediction models using OpenIFS as an example
Lauri Tuppi, Pirkka Ollinaho, Madeleine Ekblom, Vladimir Shemyakin, and Heikki Järvinen
Algorithmic model tuning is a promising approach to yield the best possible performance of multi-scale, multi-phase atmospheric models once the model structure is fixed. We ask to what degree one can trust the algorithmic tuning process. We approach the problem by studying the convergence of this process in a semi-realistic case. Let us denote M(x0; θd) as the default model, where x0 and θd are the initial state and default model parameter vectors, respectively. A necessary condition for an algorithmic tuning process to converge in a fully realistic case is that the default model is recovered if the tuning process is initialised with perturbed model parameters θ and the default model forecasts are used as pseudo-observations. In this paper we study the circumstances under which this condition holds by carrying out a large set of convergence tests using two different tuning methods and the OpenIFS model. These tests are interpreted as guidelines for algorithmic model tuning applications.
The results of this study can be used as a recipe for maximising the efficiency of algorithmic tuning. In the convergence tests, efficiency was maximised by using ensemble initial conditions, a cost function covering the entire model domain, a short forecast length and medium-sized ensembles.
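The convergence test described above can be illustrated with a toy scalar model standing in for OpenIFS: generate pseudo-observations with the default parameters, start the tuner from perturbed parameters, and check that the defaults are recovered. Everything here (the damped-oscillation "model", the two parameters, the optimiser choice) is purely illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def model(x0, theta, n_steps=10, dt=0.1):
    """Toy 'forecast model': a damped oscillation whose damping and
    frequency play the role of tunable closure parameters theta."""
    damping, freq = theta
    t = dt * np.arange(1, n_steps + 1)
    return x0 * np.exp(-damping * t) * np.cos(freq * t)

theta_default = np.array([0.5, 2.0])
x0 = 1.0
pseudo_obs = model(x0, theta_default)   # default-model forecast as "truth"

def cost(theta):
    """Sum-of-squares mismatch between candidate and pseudo-observations."""
    return np.sum((model(x0, theta) - pseudo_obs) ** 2)

theta_perturbed = theta_default * 1.3   # initialise with perturbed parameters
result = minimize(cost, theta_perturbed, method="Nelder-Mead")
print(result.x)  # close to theta_default if the tuning process converged
```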
How to cite: Tuppi, L., Ollinaho, P., Ekblom, M., Shemyakin, V., and Järvinen, H.: Necessary conditions for algorithmic tuning of weather prediction models using OpenIFS as an example, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4824, https://doi.org/10.5194/egusphere-egu2020-4824, 2020.
EGU2020-12601 | Displays | ITS4.3/AS5.2
Neural Supermodeling
Wim Wiegerinck
Deep learning is a modeling approach that has shown impressive results in image processing and is arguably a promising tool for dealing with spatially extended complex systems such as the Earth's atmosphere, with its visually interpretable patterns. A disadvantage of the neural network approach is that it typically requires an enormous amount of training data.
Another recently proposed modeling approach is supermodeling. In supermodeling it is assumed that a dynamical system – the truth – is modelled by a set of good but imperfect models. The idea is to improve model performance by dynamically combining imperfect models during the simulation. The resulting combination of models is called the supermodel. The combination strength has to be learned from data. However, since supermodels do not start from scratch, but make use of existing domain knowledge, they may learn from less data.
One of the ways to combine models is to define the tendencies of the supermodel as linear (weighted) combinations of the imperfect model tendencies. Several methods, including linear regression, have been proposed to optimize the weights. However, the combination method might also be nonlinear. In this work we propose and explore a novel combination of deep learning and supermodeling, in which convolutional neural networks are used as a tool to combine the predictions of the imperfect models. The different supermodeling strategies are applied in simulations in a controlled environment with a three-level, quasi-geostrophic spectral model that serves as ground truth and perturbed models that serve as the imperfect models.
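The linear (weighted) tendency combination can be sketched with a one-dimensional toy system standing in for the quasi-geostrophic model. Here the two "imperfect models" get the damping wrong in opposite directions, and the weights are fitted by least squares on tendencies sampled from the truth (an assumed stand-in, not the authors' setup):

```python
import numpy as np

def tendency_true(x):
    """The 'truth': a simple nonlinear ODE dx/dt = -x + sin(x)."""
    return -x + np.sin(x)

def tendency_model_a(x):   # imperfect model: damping too strong
    return -1.3 * x + np.sin(x)

def tendency_model_b(x):   # imperfect model: damping too weak
    return -0.7 * x + np.sin(x)

def supermodel_tendency(x, w):
    """Supermodel tendency: weighted combination of imperfect tendencies."""
    return w[0] * tendency_model_a(x) + w[1] * tendency_model_b(x)

# Learn the weights by least squares on tendencies sampled from the truth.
xs = np.linspace(-2, 2, 50)
A = np.stack([tendency_model_a(xs), tendency_model_b(xs)], axis=1)
w, *_ = np.linalg.lstsq(A, tendency_true(xs), rcond=None)
print(w)  # [0.5, 0.5]: the equal-weight combination recovers the truth exactly
```

In this construction the truth lies exactly in the span of the two imperfect tendencies, so the fitted supermodel is perfect; in realistic settings the combination only reduces, rather than removes, the model error.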
How to cite: Wiegerinck, W.: Neural Supermodeling, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12601, https://doi.org/10.5194/egusphere-egu2020-12601, 2020.
EGU2020-19085 | Displays | ITS4.3/AS5.2
Machine Learning for Cloud Masking
Johan Sjöberg, Sam Jackson, Karel Adamek, Wesley Armour, and Jeyarajan Thiyagalingam
As the importance of satellite readings grows in fields as varied as meteorology, urban planning and climate change science, so has the importance of satellite reading accuracy. This has in turn led to an increased need for accurate cloud masking algorithms, given the large impact that clouds have on the accuracy of these readings. Several automatic cloud masking algorithms exist, including one based on Bayesian statistics. However, they all suffer from precision issues, as well as from misclassifying normal natural phenomena such as ocean sun glint, sea ice and dust plumes as clouds. Given that these natural phenomena tend to be concentrated in certain regions, the precision of most algorithms also tends to vary from region to region.
This has led to eyes increasingly turning to machine learning and image segmentation techniques for cloud masking. This presentation describes how, and with what results, these techniques can be applied to Sentinel-3 SLSTR data, with the main focus on variations of the so-called fully convolutional networks (FCNs) originally proposed by Long and Shelhamer in 2015. Given that FCNs have performed well in areas such as medical imaging, facial detection and navigation systems for self-driving cars, there should be a large potential for them within cloud detection.
The presentation will also look into the regional variability of these machine learning techniques and whether one can improve the overall cloud masking accuracy by developing models specifically for a region. Furthermore, it will aim to demonstrate how one can, by performing simple perturbation techniques, increase the interpretability of the model predictions, a salient issue given the somewhat black-box nature of many machine learning models.
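One simple perturbation technique of the kind alluded to is occlusion sensitivity: occlude image patches one at a time and record how much the prediction changes. A minimal sketch with a stand-in "model" (the real network and SLSTR data are not reproduced here):

```python
import numpy as np

def occlusion_sensitivity(predict, image, patch=2, fill=0.0):
    """Perturbation-based interpretability: occlude each patch of the
    image and record how much the model's cloud score changes."""
    base = predict(image)
    h, w = image.shape
    sens = np.zeros_like(image)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = fill
            sens[i:i + patch, j:j + patch] = abs(base - predict(occluded))
    return sens

# Stand-in "model": cloud score = mean brightness of the top-left quadrant.
def toy_model(img):
    return img[:2, :2].mean()

img = np.ones((4, 4))
sens = occlusion_sensitivity(toy_model, img)
print(sens)  # non-zero only in the top-left patch the toy model relies on
```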
How to cite: Sjöberg, J., Jackson, S., Adamek, K., Armour, W., and Thiyagalingam, J.: Machine Learning for Cloud Masking, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19085, https://doi.org/10.5194/egusphere-egu2020-19085, 2020.
EGU2020-8492 | Displays | ITS4.3/AS5.2
Mapping (un)certainty of machine learning-based spatial prediction models based on predictor space distances
Hanna Meyer and Edzer Pebesma
Spatial mapping is an important task in environmental science to reveal spatial patterns and changes of the environment. In this context, predictive modelling using flexible machine learning algorithms has become very popular. However, looking at the diversity of modelled (global) maps of environmental variables, one might increasingly get the impression that machine learning is a magic tool to map everything. Recently, the reliability of such maps has been increasingly questioned, calling for a reliable quantification of uncertainties.
Though spatial (cross-)validation provides a general error estimate for the predictions, models are usually applied to make predictions for a much larger area, or might even be transferred to make predictions for an area they were not trained on. When making predictions over heterogeneous landscapes, however, there will be areas featuring environmental properties that have not been observed in the training data and hence not learned by the algorithm. This is problematic, as most machine learning algorithms are weak at extrapolation and can only make reliable predictions for environments with conditions the model has knowledge about. Hence, predictions for environmental conditions that differ significantly from the training data have to be considered uncertain.
To approach this problem, we suggest a measure of uncertainty that allows identifying locations where predictions should be regarded with care. The proposed uncertainty measure is based on distances to the training data in the multidimensional predictor variable space. However, distances are not equally relevant within the feature space but some variables are more important than others in the machine learning model and hence are mainly responsible for prediction patterns. Therefore, we weight the distances by the model-derived importance of the predictors.
As a case study we use a simulated area-wide response variable for Europe, bio-climatic variables as predictors, as well as simulated field samples. Random Forest is applied as algorithm to predict the simulated response. The model is then used to make predictions for entire Europe. We then calculate the corresponding uncertainty and compare it to the area-wide true prediction error. The results show that the uncertainty map reflects the patterns in the true error very well and considerably outperforms ensemble-based standard deviations of predictions as indicator for uncertainty.
The resulting map of uncertainty gives valuable insights into spatial patterns of prediction uncertainty which is important when the predictions are used as a baseline for decision making or subsequent environmental modelling. Hence, we suggest that a map of distance-based uncertainty should be given in addition to prediction maps.
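The core of the proposed measure, the importance-weighted distance to the nearest training sample in predictor space, can be sketched as follows. This is a toy two-predictor example; the weighting shown is a plain per-axis scaling and may differ in detail from the authors' implementation:

```python
import numpy as np

def dissimilarity(train_X, pred_X, importance):
    """Importance-weighted Euclidean distance from each prediction location
    to its nearest training sample in predictor space."""
    w = np.sqrt(np.asarray(importance))
    tw = train_X * w          # scale each predictor axis by its importance
    pw = pred_X * w
    d = np.linalg.norm(pw[:, None, :] - tw[None, :, :], axis=-1)
    return d.min(axis=1)      # distance to the nearest training point

train = np.array([[0.0, 0.0], [1.0, 0.0]])
pred  = np.array([[0.5, 0.0], [0.0, 5.0]])
# The second predictor dominates the model, so distances along it count more.
print(dissimilarity(train, pred, importance=[1.0, 4.0]))  # [0.5, 10.0]
```

Locations with large values lie far outside the sampled predictor space along the predictors that matter, and their predictions should be regarded with care.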
How to cite: Meyer, H. and Pebesma, E.: Mapping (un)certainty of machine learning-based spatial prediction models based on predictor space distances, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8492, https://doi.org/10.5194/egusphere-egu2020-8492, 2020.
EGU2020-7559 | Displays | ITS4.3/AS5.2 | Highlight
A deep learning based approach for inferring the distribution of potential extreme events from coarse resolution climate model output
Leroy Bird, Greg Bodeker, and Jordis Tradowsky
Frequency-based climate change attribution of extreme weather events requires thousands of years' worth of model output in order to obtain a statistically sound result. Additionally, extreme precipitation events in particular require a high-resolution model, as they can occur over a relatively small area. Unfortunately, due to storage and computational restrictions, it is not feasible to run traditional models at a sufficiently high spatial resolution for the complete duration of these simulations. Instead, we suggest that deep learning could be used to emulate a proportion of a high-resolution model at a fraction of the computational cost. More specifically, we use a U-Net, a type of convolutional neural network. The U-Net takes as input several fields from coarse-resolution model output and is trained to predict corresponding high-resolution precipitation fields. Because there are many potential precipitation fields associated with the coarse-resolution model output, stochasticity is added to the U-Net and a generative adversarial network is employed in order to help create a realistic distribution of events. By sampling the U-Net many times, an estimate of the probability of a heavy precipitation event occurring on the sub-grid scale can be derived.
How to cite: Bird, L., Bodeker, G., and Tradowsky, J.: A deep learning based approach for inferring the distribution of potential extreme events from coarse resolution climate model output, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7559, https://doi.org/10.5194/egusphere-egu2020-7559, 2020.
EGU2020-20177 | Displays | ITS4.3/AS5.2
Deep neural networks to downscale ocean climate models
Marie Déchelle-Marquet, Marina Levy, Patrick Gallinari, Michel Crepon, and Sylvie Thiria
Ocean currents have a major impact on climate variability, for instance through the heat transport they induce. Ocean climate models have quite low resolution, of about 50 km. Several dynamical processes, such as instabilities and filaments, which have a scale of 1 km, have a strong influence on the ocean state. We propose to observe and model these fine-scale effects by combining high-resolution satellite SST observations (1 km resolution, daily observations) and mesoscale-resolution altimetry observations (10 km resolution, weekly observations) with deep neural networks. Whereas the downscaling of climate models has commonly been addressed with assimilation approaches, in the last few years neural networks have emerged as a powerful multi-scale analysis method. Besides, the large amount of available oceanic data makes deep learning attractive for bridging the gap between scales of variability.
This study aims at reconstructing the multi-scale variability of oceanic fields, based on the high-resolution NATL60 model, using observations at different spatial resolutions: low-resolution sea surface height (SSH) and high-resolution SST. As the link between residual neural networks and dynamical systems has recently been established, such a network is trained in a supervised way to reconstruct the high variability of SSH and ocean currents at submesoscale (a few kilometres). To ensure the conservation of physical aspects in the model outputs, physical knowledge is incorporated into the training of the deep learning models. Different validation methods are investigated and the model outputs are tested with regard to their physical plausibility. The method's performance is discussed and compared to other baselines (namely a convolutional neural network). The generalization of the proposed method to different ocean variables such as sea surface chlorophyll or sea surface salinity is also examined.
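One common way to incorporate physical knowledge of this kind into training is an extra loss term enforcing that the downscaled field, re-averaged to the coarse grid, stays consistent with the coarse input. The following is a generic numpy sketch of such a loss, not the authors' actual formulation:

```python
import numpy as np

def coarsen(field, factor):
    """Block-average a high-resolution field to coarse resolution."""
    h, w = field.shape
    return field.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def downscaling_loss(pred_hr, target_hr, input_lr, factor=4, lam=1.0):
    """Data-fidelity MSE plus a penalty keeping the downscaled field
    consistent with the coarse input when re-averaged."""
    mse = np.mean((pred_hr - target_hr) ** 2)
    conservation = np.mean((coarsen(pred_hr, factor) - input_lr) ** 2)
    return mse + lam * conservation

hr_truth = np.random.default_rng(1).normal(size=(8, 8))
lr_input = coarsen(hr_truth, 4)
print(downscaling_loss(hr_truth, hr_truth, lr_input))  # 0.0: perfect prediction
```

In a deep learning framework the same two terms would be written with the framework's tensor operations and minimised by gradient descent.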
How to cite: Déchelle-Marquet, M., Levy, M., Gallinari, P., Crepon, M., and Thiria, S.: Deep neural networks to downscale ocean climate models, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20177, https://doi.org/10.5194/egusphere-egu2020-20177, 2020.
EGU2020-17919 | Displays | ITS4.3/AS5.2
Causal Discovery as a novel approach for CMIP6 climate model evaluation
Kevin Debeire, Veronika Eyring, Peer Nowack, and Jakob Runge
Causal discovery algorithms are machine learning methods that estimate the dependencies between different variables. One of these algorithms, the recently developed PCMCI algorithm (Runge et al., 2019), estimates the time-lagged causal dependency structures from multiple time series and is adapted to common properties of Earth system time series data. The PCMCI algorithm has already been successfully applied in climate science to reveal known interaction pathways between Earth regions, commonly referred to as teleconnections, and to explore new teleconnections (Kretschmer et al., 2017). One recent study used this method to evaluate models participating in the Coupled Model Intercomparison Project Phase 5 (CMIP5) (Nowack et al., 2019).
Here, we build on the Nowack et al. study and use PCMCI on dimension-reduced meteorological reanalysis data and CMIP6 ensemble data. The resulting causal networks represent teleconnections (causal links) in each of the CMIP6 climate models. The models’ performance in representing realistic teleconnections is then assessed by comparing the causal networks of the individual CMIP6 models to the one obtained from meteorological reanalysis. We show that causal discovery is a promising and novel approach that complements existing model evaluation approaches.
References:
Runge, J., P. Nowack, M. Kretschmer, S. Flaxman, D. Sejdinovic, Detecting and quantifying causal associations in large nonlinear time series datasets. Sci. Adv. 5, eaau4996, 2019.
Kretschmer, M., J. Runge, and D. Coumou, Early prediction of extreme stratospheric polar vortex states based on causal precursors, Geophysical Research Letters, doi:10.1002/2017GL074696, 2017.
Nowack, P. J., J. Runge, V. Eyring, and J. D. Haigh, Causal networks for climate model evaluation and constrained projections, in review, 2019.
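The comparison of a model's causal network against the reanalysis network can be scored, for example, with an F1 measure over the detected links. A toy sketch on binary adjacency matrices (the metric and network representation here are illustrative and may differ from the study's):

```python
import numpy as np

def network_f1(model_net, reference_net):
    """F1 score between the causal links of a model network and a
    reference (reanalysis) network, both given as binary adjacency
    matrices with entry [i, j] = 1 if region i drives region j."""
    tp = np.sum((model_net == 1) & (reference_net == 1))  # links in both
    fp = np.sum((model_net == 1) & (reference_net == 0))  # spurious links
    fn = np.sum((model_net == 0) & (reference_net == 1))  # missed links
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

reanalysis = np.array([[0, 1, 0],
                       [0, 0, 1],
                       [1, 0, 0]])
model      = np.array([[0, 1, 0],
                       [0, 0, 0],
                       [1, 1, 0]])
print(network_f1(model, reanalysis))  # 2/3: two links matched, one spurious, one missed
```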
How to cite: Debeire, K., Eyring, V., Nowack, P., and Runge, J.: Causal Discovery as a novel approach for CMIP6 climate model evaluation, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17919, https://doi.org/10.5194/egusphere-egu2020-17919, 2020.
EGU2020-20132 | Displays | ITS4.3/AS5.2
Towards synthetic data generation for machine learning models in weather and climate
David Meyer
The use of real data for training machine learning (ML) models is often a source of major limitations. For example, real data may be (a) representative of only a subset of situations and domains, (b) expensive to produce, and (c) limited to specific individuals due to licensing restrictions. Although the use of synthetic data is becoming increasingly popular in computer vision, ML models used in weather and climate still rely on large datasets of real data. Here we present some recent work towards the generation of synthetic data for weather and climate applications and outline some of the major challenges and limitations encountered.
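As a minimal illustration of the idea, labelled training samples can be generated from a cheap parametric process whose ground truth is known by construction. Everything below, the sinusoidal "temperature profile" model and its parameter ranges, is an invented stand-in for a real weather simulator:

```python
import math
import random

# Toy synthetic-data generator: labelled "temperature profiles" drawn from a
# simple parametric model. All parameter ranges are illustrative assumptions.
def synthetic_profile(rng):
    amplitude = rng.uniform(5.0, 15.0)      # diurnal amplitude (K)
    mean_temp = rng.uniform(270.0, 300.0)   # daily mean temperature (K)
    profile = [mean_temp + amplitude * math.sin(2 * math.pi * h / 24)
               for h in range(24)]
    noisy = [t + rng.gauss(0.0, 0.5) for t in profile]
    # features: noisy hourly profile; label: the true generating parameters
    return noisy, (amplitude, mean_temp)

rng = random.Random(42)
features, labels = zip(*(synthetic_profile(rng) for _ in range(100)))
print(len(features), len(features[0]))  # 100 samples, 24 hourly values each
```

Because the labels are known exactly, such samples can supervise an ML model at negligible cost; the open challenge is whether the generator's distribution matches real conditions.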
How to cite: Meyer, D.: Towards synthetic data generation for machine learning models in weather and climate, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20132, https://doi.org/10.5194/egusphere-egu2020-20132, 2020.
EGU2020-21329 | Displays | ITS4.3/AS5.2
Multisensor crop yield estimation with machine learning
Laura Martínez Ferrer, Maria Piles, and Gustau Camps-Valls
Providing accurate and spatially resolved predictions of crop yield is of utmost importance due to the rapid increase in the demand for biofuels and food in the foreseeable future. Satellite-based remote sensing over agricultural areas allows monitoring crop development through key bio-geophysical variables such as the Enhanced Vegetation Index (EVI), sensitive to canopy greenness, the Vegetation Optical Depth (VOD), sensitive to biomass water-uptake dynamics, and Soil Moisture (SM), which provides direct information on plant-available water. The aim of this work is to implement an automatic system for county-based crop yield estimation using time series from multisource satellite observations, meteorological data and available in situ surveys as supporting information. The spatio-temporal resolution of satellite and meteorological observations is fully exploited and synergistically combined for crop yield prediction using machine learning models. Linear and non-linear regression methods are used: least squares, LASSO, random forests, kernel machines and Gaussian processes. Here we are not only interested in the prediction skill, but also in understanding the relative relevance of the covariates. For this, we first study the importance of each feature separately and then propose a global model for operational monitoring of crop status using the most relevant agro-ecological drivers.
We selected the continental U.S. and a four-year time series dataset to perform the research study. Results reveal that the three satellite variables are complementary and that their combination with maximum temperature and precipitation from meteorological stations provides the best estimations. Interestingly, adding information about crop planted area also improved the predictions. A non-linear regression model based on Gaussian processes led to the best results for all considered crops (soybean, corn and wheat), with high accuracy (low bias and correlation coefficients ranging from 0.75 to 0.92). The feature ranking allowed us to understand the main drivers of crop monitoring and the underlying factors behind a prediction loss or gain.
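The simplest member of the regression family above, least squares, can be sketched end-to-end in a few lines. The three covariates stand in for EVI, VOD and SM, and all weights and data are synthetic values invented for the example, not results from the study:

```python
import random

def solve3(A, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

rng = random.Random(0)
X = [[rng.random(), rng.random(), rng.random()] for _ in range(200)]  # "EVI, VOD, SM"
true_w = [2.0, 1.0, 0.5]                     # illustrative yield sensitivities
y = [sum(w * v for w, v in zip(true_w, row)) + rng.gauss(0, 0.001) for row in X]

# Normal equations: (X'X) w = X'y
XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
Xty = [sum(r[i] * t for r, t in zip(X, y)) for i in range(3)]
w = solve3(XtX, Xty)
print([round(v, 2) for v in w])  # recovers [2.0, 1.0, 0.5]
```

The magnitude of the recovered weights is one crude proxy for covariate relevance; the study's non-linear models (random forests, Gaussian processes) admit analogous but more robust feature rankings.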
How to cite: Martínez Ferrer, L., Piles, M., and Camps-Valls, G.: Multisensor crop yield estimation with machine learning, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21329, https://doi.org/10.5194/egusphere-egu2020-21329, 2020.
EGU2020-11263 | Displays | ITS4.3/AS5.2
Deep reinforcement learning in World-Earth system models to discover sustainable management strategies
Felix Strnad, Wolfram Barfuss, Jonathan Donges, and Jobst Heitzig
The identification of pathways leading to robust mitigation of dangerous anthropogenic climate change is nowadays of particular interest not only to the scientific community but also to policy makers and the wider public.
Increasingly complex, non-linear World-Earth system models are used to describe the dynamics of the biophysical Earth system and the socio-economic and socio-cultural World of human societies and their interactions. Identifying pathways towards a sustainable future in these models is a challenging and widely investigated task in the field of climate research and broader Earth system science. The problem is especially difficult when both environmental limits and social foundations need to be taken into account.
In this work, we propose to combine recently developed machine learning techniques, namely deep reinforcement learning (DRL), with classical analysis of trajectories in the World-Earth system, as an approach to extend the field of Earth system analysis with a new method. Based on the concept of the agent-environment interface, we develop a method for using a DRL agent that is able to act and learn in variable, manageable environment models of the Earth system in order to discover management strategies for sustainable development.
We demonstrate the potential of our framework by applying DRL algorithms to stylized World-Earth system models. The agent can apply management options to an environment, an Earth system model, and learns from rewards provided by the environment. We train our agent with a deep Q-network extended by current state-of-the-art algorithms. Conceptually, we thereby explore the feasibility of finding novel global governance policies leading into a safe and just operating space constrained by certain planetary and socio-economic boundaries.
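The study trains a deep Q-network; the self-contained sketch below instead uses tabular Q-learning, the simplest member of the same value-learning family, on an invented two-state "management" problem. States, actions, rewards and hyperparameters are all illustrative, but the sketch shows the key mechanism: delayed rewards make the agent prefer regulation over a one-off payoff.

```python
import random

# Tabular Q-learning on a toy management problem: in state "safe", the action
# "regulate" keeps the system safe (reward 1 per step); "exploit" pays 2 once
# but tips the system into "degraded", which pays 0 forever. Numbers invented.
STATES, ACTIONS = ["safe", "degraded"], ["regulate", "exploit"]

def step(state, action):
    if state == "degraded":
        return "degraded", 0.0          # collapse is irreversible here
    if action == "regulate":
        return "safe", 1.0
    return "degraded", 2.0              # short-term gain, long-term loss

rng = random.Random(1)
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.2       # learning rate, discount, exploration
for episode in range(500):
    state = "safe"
    for _ in range(30):
        action = (rng.choice(ACTIONS) if rng.random() < eps
                  else max(ACTIONS, key=lambda a: Q[(state, a)]))
        nxt, reward = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = nxt

# Discounted value of staying safe (1/(1-gamma) = 10) beats the one-off 2
print(Q[("safe", "regulate")] > Q[("safe", "exploit")])  # True
```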
We find that the agent is able to learn novel, previously undiscovered policies that navigate the system into sustainable regions of the underlying conceptual models of the World-Earth system. In particular, the artificially intelligent agent learns that the timing of a specific mix of taxing carbon emissions and subsidies on renewables is of crucial relevance for finding World-Earth system trajectories that are sustainable in the long term. Overall, we show in this work how concepts and tools from artificial intelligence can help to address the current challenges on the way towards sustainable development.
Underlying publication:
[1] Strnad, F. M., Barfuss, W., Donges, J. F., and Heitzig, J.: Deep reinforcement learning in World-Earth system models to discover sustainable management strategies, Chaos: An Interdisciplinary Journal of Nonlinear Science, 29, 123122, 2019.
How to cite: Strnad, F., Barfuss, W., Donges, J., and Heitzig, J.: Deep reinforcement learning in World-Earth system models to discover sustainable management strategies, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11263, https://doi.org/10.5194/egusphere-egu2020-11263, 2020.
ITS4.5/GI1.4 – New frontiers of multiscale monitoring, analysis, modeling and decisional support (DSS) of environmental systems
EGU2020-11805 | Displays | ITS4.5/GI1.4
Spatial-temporal variations of surface diffuse CO2 degassing at El Hierro volcano, Canary Islands
Pedro A. Hernández, Christopher A. Skeldon, Jingwei Zhang, Fátima Rodríguez, Cecilia Amonte, María Asensio-Ramos, Gladys V. Melián, Eleazar Padrón, and Nemesio M. Pérez
El Hierro (278 km2), the youngest, smallest and westernmost island of the Canarian archipelago, is a 5-km-high edifice constructed by rapid constructive and destructive processes over ~1.12 Ma, with a truncated trihedral shape and three convergent ridges of volcanic cones. It experienced a submarine eruption off its southern coast from 12 October 2011 to 5 March 2012, the first eruption in the Canary Islands to be monitored from its onset. As no visible emanations occur at the surface environment of El Hierro, diffuse degassing studies are a useful geochemical tool to monitor the volcanic activity of this island. Diffuse CO2 emission surveys have been performed at El Hierro Island since 1998 on a yearly basis, with much higher frequency during the period 2011-2012. At each survey, about 600 sampling sites are selected to obtain a homogeneous distribution. Measurements of soil CO2 efflux are performed in situ following the accumulation chamber method. During the pre-eruptive and eruptive periods, the diffuse CO2 emission released by the whole island experienced significant increases before the onset of the submarine eruption and before the most energetic seismic events of the volcanic-seismic unrest (Melián et al., 2014, J. Geophys. Res. Solid Earth, 119, 6976–6991). The most recent diffuse CO2 efflux survey was carried out in July 2019. Values ranged from non-detectable to 28.9 g m−2 d−1. Statistical-graphical analysis of the data shows two different geochemical populations, background (B) and peak (P), represented by 97.5% and 0.5% of the total data and with geometric means of 1.2 and 23.6 g m−2 d−1, respectively. Most of the area showed B values, while P values were mainly observed at the intersection of the three convergent ridges and in the north-east of the island. To estimate the diffuse CO2 emission for the 2019 survey, we ran about 100 sequential Gaussian simulations (sGs).
The estimated 2019 diffuse CO2 output released to the atmosphere by El Hierro was 214 ± 10 t d−1, a value lower than the background average CO2 emission of 412 t d−1 and slightly above the lower limit of the background range of 181 t d−1 (−1σ) to 930 t d−1 (+1σ) estimated at El Hierro volcano during the 1998-2010 quiescence period (Melián et al., 2014, JGR). Monitoring the diffuse CO2 emission has proven to be a very effective tool for detecting early warning signals of volcanic unrest at El Hierro.
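For orientation, the order of magnitude of such an island-wide output follows from upscaling a mean efflux over the island area. The mean flux below is an illustrative value implied by the reported total and area, not a number taken from the survey itself (which used geostatistical simulations rather than a single mean):

```python
# Back-of-the-envelope upscaling of a mean soil CO2 efflux to an island-wide
# output. The study itself averages ~100 sequential Gaussian simulations.
AREA_KM2 = 278                     # surface area of El Hierro
mean_flux_g_m2_d = 0.77            # illustrative island-average efflux (g m-2 d-1)

area_m2 = AREA_KM2 * 1e6           # 1 km^2 = 1e6 m^2
total_g_per_day = mean_flux_g_m2_d * area_m2
total_t_per_day = total_g_per_day / 1e6    # 1 t = 1e6 g
print(round(total_t_per_day))      # 214, same order as the reported output
```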
How to cite: Hernández, P. A., Skeldon, C. A., Zhang, J., Rodríguez, F., Amonte, C., Asensio-Ramos, M., Melián, G. V., Padrón, E., and Pérez, N. M.: Spatial-temporal variations of surface diffuse CO2 degassing at El Hierro volcano, Canary Islands, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11805, https://doi.org/10.5194/egusphere-egu2020-11805, 2020.
EGU2020-7307 | Displays | ITS4.5/GI1.4
The 2019 Stromboli eruption: the space-borne and ground-based InSAR contribution
Teresa Nolesini, Federico Di Traglia, Francesco Casu, Claudio De Luca, Mariarosaria Manzo, Riccardo Lanari, and Nicola Casagli
On 3 July 2019, Stromboli experienced a paroxysmal explosion without long-term precursors, unlike the last two effusive eruptions. In the following months, lava outpoured from a vent located in the SW crater area and, sporadically, from the NE one. On 28 August 2019, a new paroxysmal explosion occurred, followed by strong volcanic activity culminating in a lava flow emitted from the SW-Central crater area. Subsequently, the eruptive activity decreased, although frequent instability phenomena linked to the growth of new cones on the edge of the crater terrace occurred. This contribution summarizes the measurements obtained through space-borne and ground-based InSAR sensors. The ground-based data allowed us to detect pressurization of the summit area, as well as the instability of the newly emplaced material. The satellite data, in turn, helped to identify the slope dynamics. The integration of these complementary systems strengthens the monitoring of both the eruptive activity and the instability phenomena.
This work is supported by the 2019-2021 Università di Firenze and Italian Civil Protection Department agreement, and by the 2019-2021 IREA-CNR and Italian Civil Protection Department agreement.
How to cite: Nolesini, T., Di Traglia, F., Casu, F., De Luca, C., Manzo, M., Lanari, R., and Casagli, N.: The 2019 Stromboli eruption: the space-borne and ground-based InSAR contribution, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7307, https://doi.org/10.5194/egusphere-egu2020-7307, 2020.
EGU2020-536 | Displays | ITS4.5/GI1.4
Multiscale edge-detection methods for the geometrical constraint of deformation sources in the volcanic environment
Andrea Barone, Raffaele Castaldo, Maurizio Fedi, Susi Pepe, Giuseppe Solaro, and Pietro Tizzani
The development of satellite remote sensing technologies is providing a great contribution to the monitoring of volcanic phenomena. Specifically, the large amount of ground deformation field data (i.e., DInSAR measurements) holds information about the changes of physical and geometrical parameters of deep and shallow volcanic reservoirs; therefore, the exploitation of these data becomes an important task, since they actively contribute to hazard evaluation.
Currently, DInSAR measurements are mostly used for modeling volcanic deformation sources through optimization and inversion procedures; although the latter provide a physical and geometrical model for the considered volcanic site, their results strongly depend on the availability of a priori information and on the assumptions made about the physical setting; therefore, they do not provide a unique solution and are unlikely to guarantee a correct analysis in multi-source cases.
In this scenario, we consider a new methodology based on the use of edge-detection methods for exploiting DInSAR measurements and characterizing active volcanic sources. Specifically, it allows the estimation of the source geometrical parameters, such as its depth, horizontal position, morphological features and horizontal sizes, by using the Multiridge, ScalFun and Total Horizontal Derivative (THD) methods. In particular, the validity of the Multiridge and ScalFun methods has been proven for modeling the point-spherical source independently of its physical features, such as the pressure variation, of the physical-elastic parameters of the medium, such as the shear modulus, and of low signal-to-noise ratios.
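Of the three methods, the THD has the simplest form: the magnitude of the horizontal gradient of the deformation field, whose maxima outline the edges of the source. A self-contained sketch on a synthetic Gaussian "deformation bump" (simple central finite differences, a stand-in for real DInSAR data, not the authors' implementation):

```python
import math

# Total Horizontal Derivative of a field u(x, y) sampled on a regular grid:
# THD = sqrt((du/dx)^2 + (du/dy)^2); its maxima track source edges.
def thd(u, dx=1.0, dy=1.0):
    ny, nx = len(u), len(u[0])
    out = [[0.0] * nx for _ in range(ny)]
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            dudx = (u[j][i + 1] - u[j][i - 1]) / (2 * dx)  # central difference
            dudy = (u[j + 1][i] - u[j - 1][i]) / (2 * dy)
            out[j][i] = math.hypot(dudx, dudy)
    return out

# Synthetic "deformation": a Gaussian bump centred on a 21x21 grid
n = 21
c = n // 2
u = [[math.exp(-((i - c) ** 2 + (j - c) ** 2) / 18.0) for i in range(n)]
     for j in range(n)]
t = thd(u)
# THD vanishes over the bump's centre and peaks on a ring around it
print(t[c][c], max(max(row) for row in t))
```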
We now extend the Multiridge and ScalFun methods from the hydrostatic-pressure point source to the tensile one, and then to other analytical models (the rectangular tensile fault and the prolate spheroid), in order to investigate volcanic sources such as sills, dikes and pipes.
Specifically, after the analysis of the physical and mathematical features of the considered models, we apply Multiridge and ScalFun methods to the synthetic vertical and E-W components of the ground deformation field. We carefully evaluate the advantages and the limitations which could characterize these cases, showing how to solve critical aspects. We especially focus on the sill-like source, for which the edge-detection methods provide very satisfying results. In addition, we perform a joint exploitation of the edge-detection methods to model the deformation source of Fernandina volcano (Galapagos archipelago) by analyzing COSMO-SkyMed acquisitions related to the 2012-2013 time interval.
In conclusion, this approach allows retrieving unambiguous information about the geometrical configuration of the analyzed deformation pattern. We remark that, although a subsequent analysis is required to fully interpret the ground deformation measurements, this methodology provides a reliable geometrical model, which can be used as a priori information to constrain the entire interpretation procedure in subsequent analyses.
How to cite: Barone, A., Castaldo, R., Fedi, M., Pepe, S., Solaro, G., and Tizzani, P.: Multiscale edge-detection methods for the geometrical constraint of deformation sources in the volcanic environment, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-536, https://doi.org/10.5194/egusphere-egu2020-536, 2020.
EGU2020-17569 | Displays | ITS4.5/GI1.4
An extended GeoNode-Based Platform for Detailed Analysis of the Spatial/Temporal DInSAR Information Contents
Adele Fusco, Sabatino Buonanno, Giovanni Zeni, Michele Manunta, Maria Marsella, Paola Carrara, and Riccardo Lanari
We present an efficient tool for managing, visualizing, analysing, and integrating with other data sources, Earth Observation (EO) data for the analysis of surface deformation phenomena. In particular, we focus on EO data obtained by advanced processing of Synthetic Aperture Radar (SAR) data for monitoring wide areas of the Earth's surface. More specifically, we refer to the advanced differential interferometric SAR (DInSAR) technique, which has demonstrated its capability to detect, map and analyse ongoing surface displacement phenomena, both spatially and temporally, with centimetre to millimetre accuracy, thanks to the generation of deformation maps and time series. Currently, the DInSAR scenario is characterized by a huge availability of SAR data acquired during the last 25 years, now with a massive and ever-increasing data flow supplied by the C-band Sentinel-1 (S1) constellation of the European Copernicus program.
In this big picture, Spatial Data Infrastructures (SDIs) become a fundamental tool for implementing a framework that handles the informative content of geographic data. Indeed, an SDI represents a collection of technologies, policies, standards, human resources, and related activities permitting the acquisition, processing, distribution, use, maintenance, and preservation of spatial data.
We implemented an SDI by extending the functionalities of GeoNode, a web-based platform providing an open-source framework based on Open Geospatial Consortium (OGC) standards. OGC standards ease interoperability, an extremely important aspect because they allow data producers to share geospatial information in all types of cooperative processes, avoiding duplication of efforts and costs. Our GeoNode-based platform extends a Geographic Information System (GIS) to a web-accessible resource and adapts the SDI tools to DInSAR-related requirements.
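Interoperability via OGC standards means, in practice, that any compliant client can request a layer from the platform with a standard WMS GetMap call. A sketch of building such a request; the hostname, workspace and layer name are placeholders, not the actual service endpoint:

```python
from urllib.parse import urlencode

# Build a standard OGC WMS 1.3.0 GetMap request URL. Under WMS 1.3.0 with
# EPSG:4326, the BBOX axis order is latitude first: (minlat, minlon, maxlat, maxlon).
def wms_getmap_url(base, layer, bbox, size=(800, 600)):
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "CRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": str(size[0]),
        "HEIGHT": str(size[1]),
        "FORMAT": "image/png",
    }
    return base + "?" + urlencode(params)

# Hypothetical endpoint and layer covering Italy (lat 36-47, lon 6-19)
url = wms_getmap_url("https://example-geonode.org/geoserver/wms",
                     "dinsar:mean_velocity_2015_2018",
                     (36.0, 6.0, 47.0, 19.0))
print(url)
```

Because the request is standard, the same URL pattern works against any WMS-compliant server; this is the duplication-of-effort saving the OGC standards buy.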
Our efforts have been dedicated to enabling the GeoNode platform to effectively analyze and visualize the spatial/temporal characteristics of the DInSAR deformation time series and their related products. Moreover, the newly implemented multi-thread functionalities allow us to efficiently upload and update large data volumes of the available DInSAR results in a dedicated geodatabase. We demonstrate the high performance of the implemented GeoNode-based platform by showing DInSAR results relevant to the acquisitions of the Sentinel-1 constellation collected during 2015-2018 over Italy.
This work is supported by the 2019-2021 IREA CNR and Italian Civil Protection Department agreement; the H2020 EPOS-SP project (GA 871121); the I-AMICA (PONa3_00363) project; and the IREA-CNR/DGSUNMIG agreement.
How to cite: Fusco, A., Buonanno, S., Zeni, G., Manunta, M., Marsella, M., Carrara, P., and Lanari, R.: An extended GeoNode-Based Platform for Detailed Analysis of the Spatial/Temporal DInSAR Information Contents, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17569, https://doi.org/10.5194/egusphere-egu2020-17569, 2020.
EGU2020-251 | Displays | ITS4.5/GI1.4
How will the next eruption in Tenerife affect aviation?Alberto Prieto, Luca D'Auria, Giovanni Macedonio, Pedro Antonio Hernández, and William Hernández
During a volcanic eruption, one of the most relevant threats for civil aviation is the dispersion of volcanic ash in the atmosphere. All aircraft are susceptible to damage from volcanic ash, even at low concentrations. The economy of the Canary Islands (Spain) strongly depends on tourism, so it is of fundamental importance to estimate the consequences of a possible eruptive scenario on the air traffic in the archipelago and, consequently, on tourism. We made an exhaustive study of the impact of volcanic ash on aviation for one of the most important islands of the archipelago: Tenerife.
We developed a large set of numerical simulations of small-magnitude eruptions in Tenerife, which are the most probable eruptive scenario on this island. Our main goal is to develop a probabilistic approach to identify the airports most affected by dispersion and fallout of volcanic ash. We carried out more than a thousand simulations with the software FALL3D using the Teide-HPC supercomputing facilities of the Instituto Tecnológico y de Energías Renovables (ITER). To model the small-magnitude eruptions, we generated datasets of total erupted ash mass and eruption duration using a bivariate empirical probability density function obtained by Kernel Density Estimation (KDE) from data of historical eruptions in Tenerife. The vent positions were selected following the spatial density of vents related to Holocene eruptions. Granulometries were drawn from a bi-Gaussian distribution of particle size ranging from Φ = -1 to Φ = 12, where Φ = -log2(d) (d: diameter in mm). The number of eruptive phases within each eruption is selected randomly. We split the total eruptive duration equally into these phases and set a Gaussian distribution at the centre of each division; the boundary between consecutive phases is then drawn from these Gaussian distributions, so that the phases end up with different durations.
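The bivariate KDE sampling of eruption parameters can be sketched with SciPy. The historical (mass, duration) values below are invented placeholders, not the actual Tenerife catalogue; the point is the shape of the workflow: fit an empirical bivariate PDF, then draw correlated (mass, duration) pairs for the simulation ensemble.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical historical eruptions: (log10 total mass [kg], duration [days]).
# gaussian_kde expects variables in rows, observations in columns.
historical = np.array([
    [11.2, 40], [11.8, 90], [10.9, 20], [11.5, 60],
    [12.1, 120], [11.0, 30], [11.6, 75], [11.3, 50],
]).T  # shape (2, 8)

kde = gaussian_kde(historical)   # bivariate empirical PDF
samples = kde.resample(1000)     # shape (2, 1000): (mass, duration) pairs

log_masses, durations = samples
print(samples.shape)  # (2, 1000)
```

Each column of `samples` then parameterizes one FALL3D run, so the ensemble honours the joint mass-duration distribution of the historical record rather than sampling the two quantities independently.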
All the simulations are coupled with ERA-Interim meteorological reanalysis data from the European Centre for Medium-Range Weather Forecasts (ECMWF). We have implemented a probabilistic procedure to map in 3D the hazard associated with volcanic ash. For this purpose, we calculated concentration percentiles (P25, P50 and P75) and time intervals of high ash concentration to evaluate the hazard of suspended ash in the volume surrounding the major airports of Tenerife.
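Computing the per-cell percentiles over the simulation ensemble is a one-liner with NumPy. The ensemble below is synthetic (a lognormal stand-in for FALL3D output) and the 2 mg/m3 threshold is an illustrative aviation-relevant value, not one taken from the abstract.

```python
import numpy as np

# Synthetic ensemble of simulated ash concentrations [g/m^3] on a 3-D grid:
# axis 0 indexes the runs, the rest is (nz, ny, nx).
rng = np.random.default_rng(0)
ensemble = rng.lognormal(mean=-3.0, sigma=1.0, size=(1000, 10, 20, 20))

# Concentration percentiles across the ensemble, per grid cell
p25, p50, p75 = np.percentile(ensemble, [25, 50, 75], axis=0)

# Probability of exceeding an illustrative threshold (2 mg/m^3)
threshold = 2e-3
exceedance = (ensemble > threshold).mean(axis=0)
print(p50.shape, exceedance.shape)  # (10, 20, 20) (10, 20, 20)
```

The resulting 3-D percentile and exceedance-probability volumes can then be interrogated around each airport's approach airspace.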
How to cite: Prieto, A., D'Auria, L., Macedonio, G., Hernández, P. A., and Hernández, W.: How will the next eruption in Tenerife affect aviation?, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-251, https://doi.org/10.5194/egusphere-egu2020-251, 2020.
EGU2020-19142 | Displays | ITS4.5/GI1.4
New meteorological products based on Sentinel-1 and GNSSGiovanni Nico, Francesco Vespe, Olimpia Masci, Pedro Mateus, João Catalão, and Elisa Rosciano
Recently, the SAR Meteorology technique has demonstrated the advantages of assimilating Sentinel-1 maps of Precipitable Water Vapor (PWV) into high-resolution Numerical Weather Prediction (NWP) models when forecasting extreme weather events [1]. The impact of Sentinel-1 information on NWP forecasts depends on the acquisition parameters of the Sentinel-1 images and on the physical state of the atmosphere [2]. Besides meteorological applications, enhanced NWP forecasts could also help mitigate atmospheric artifacts in SAR interferometry applications based on NWP simulations [3].
This work describes a methodology to provide measurements of the microwave propagation delay in the troposphere. The proposed methodology is based on the processing of Sentinel-1 and GNSS data. In particular, Sentinel-1 images are processed by means of the SAR Interferometry technique to obtain measurements of the tropospheric propagation delay over land, assuming that the phase contribution due to terrain displacements can be neglected. To fulfil this condition, the interferometric processing is carried out on Sentinel-1 images with the shortest temporal baseline of six days. Interferometric coherence is used to select the portions of the interferogram where estimates of PWV and the corresponding precision are provided. GNSS measurements of the atmospheric propagation delay are used to validate the Sentinel-1 measurements and to derive a quality figure for the PWV maps. A procedure is presented to concatenate PWV maps in time in order to derive a time series of spatially dense PWV measurements and the corresponding precisions.
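The coherence-based pixel selection and phase-to-PWV conversion can be sketched with NumPy. Everything here is a toy illustration under stated assumptions: the phase and coherence arrays are synthetic, the incidence angle is nominal, the coherence threshold of 0.7 is illustrative, and the dimensionless delay-to-PWV factor Pi is taken at its commonly cited value of about 0.15, not from this abstract.

```python
import numpy as np

# Synthetic stand-ins for an unwrapped differential interferogram [rad]
# and its coherence map; real inputs would come from a 6-day Sentinel-1 pair.
rng = np.random.default_rng(1)
ny, nx = 200, 200
phase = rng.normal(0.0, 2.0, (ny, nx))
coherence = rng.uniform(0.0, 1.0, (ny, nx))

wavelength = 0.0556            # Sentinel-1 C-band wavelength [m]
incidence = np.deg2rad(39.0)   # assumed mean incidence angle
PI = 0.15                      # assumed PWV/zenith-wet-delay factor (~0.15)

# Differential slant delay from the phase, projected to zenith, then to PWV
slant_delay = phase * wavelength / (4.0 * np.pi)    # [m]
zenith_wet_delay = slant_delay * np.cos(incidence)  # [m]
dpwv_mm = PI * zenith_wet_delay * 1000.0            # differential PWV [mm]

# Keep only pixels whose coherence supports a reliable estimate
mask = coherence >= 0.7
dpwv_masked = np.where(mask, dpwv_mm, np.nan)
print(dpwv_masked.shape)
```

Note that the interferogram yields a *differential* PWV between the two acquisition dates; the concatenation procedure described above is what turns these pairwise differences into an absolute time series anchored to GNSS.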
Furthermore, Radio Occultation (RO) profiles are obtained by processing GNSS data. These profiles will be used to estimate the tropospheric propagation delay over sea. In this way, maps of the atmospheric propagation delay over both land and sea, albeit with different spatial densities of measurements, will be provided.
The study area includes the Basilicata, Calabria and Apulia regions and the Gulf of Taranto, southern Italy.
This work was supported by the Ministero dell'Istruzione, dell'Università e della Ricerca (MIUR), Italy, under the project OT4CLIMA.
References
[1] P. Mateus, J. Catalão, G. Nico, “Sentinel-1 interferometric SAR mapping of precipitable water vapor over a country-spanning area”, IEEE Transactions on Geoscience and Remote Sensing, 55(5), 2993-2999, 2017.
[2] P.M.A. Miranda, P. Mateus, G. Nico, J. Catalão, R. Tomé, M. Nogueira, “InSAR meteorology: High‐resolution geodetic data can increase atmospheric predictability”, Geophysical Research Letters, 46(5), 2949-2955, 2019.
[3] G. Nico, R. Tome, J. Catalao, P.M.A. Miranda, “On the use of the WRF model to mitigate tropospheric phase delay effects in SAR interferograms”, IEEE Transactions on Geoscience and Remote Sensing, 49(12), 4970-4976, 2011.
How to cite: Nico, G., Vespe, F., Masci, O., Mateus, P., Catalão, J., and Rosciano, E.: New meteorological products based on Sentinel-1 and GNSS, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19142, https://doi.org/10.5194/egusphere-egu2020-19142, 2020.
EGU2020-21962 | Displays | ITS4.5/GI1.4
Role of HAPS in the Earth Observation Multi-platform Paradigm for Environmental MonitoringGiuseppe Persechino, Vincenzo Baraniello, Sara Parrilli, Francesco Tufano, and Guido Rianna
Environmental monitoring often requires the observation of phenomena at different spatial and temporal scales. For example, to study the anthropic impact on natural ecosystems, it is necessary both to evaluate its effects on a large scale and to detect and recognize the environmental criticalities that, locally, determine these effects. These needs impose tight requirements on the temporal, spatial, and spectral resolution of the data, which a single aerospace platform can hardly satisfy. Therefore, it is necessary to develop new collaborative paradigms among different platforms to improve their observation capabilities, exploiting interoperability between heterogeneous platforms and sensors. However, due to the limitations in revisit time and sensor spatial resolution of the individual platforms currently available, even multi-platform approaches cannot fully comply with the requirements imposed by some specific environmental issues at acceptable costs.
In this paper, the use of HAPS (High Altitude Pseudo-Satellites) is proposed as a tool to overcome these limitations and to extend the applicability of the multi-platform paradigm.
In the environmental monitoring context, one of the main advantages offered by HAPSs is the possibility of providing data with higher spatial resolution than satellites, at lower cost than aerial platforms. Moreover, HAPSs offer a larger field of view than UAVs and can provide data types, such as fluorescence or hyperspectral imagery, that could rarely be acquired by UAVs because of sensor weight and cost. Finally, thanks to their station-keeping capability over a desired area, HAPS platforms offer the possibility of acquiring data with high temporal resolution, allowing the temporal evolution of phenomena to be monitored at a rate currently not possible with other platforms.
Different HAPS configurations have been proposed, based on aerostatic or aerodynamic forces. CIRA is designing a HAPS that, thanks to its hybrid configuration, is able to generate both aerodynamic and aerostatic forces. It could fly at an altitude of 18-20 km; from this altitude range, the field of view has a diameter of about 600 km. Maintenance and updating of its equipment and payload are also possible because the platform can land and take off again.
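A quick geometric cross-check of that field-of-view figure (this calculation is ours, not from the abstract): the line-of-sight horizon from 18-20 km altitude has a diameter of roughly 950-1000 km, so the quoted ~600 km sits comfortably inside it and presumably reflects the payload's minimum usable viewing angles rather than the geometric limit.

```python
import math

R = 6371.0  # mean Earth radius [km]

def horizon_diameter(h_km):
    """Diameter of the geometric horizon circle seen from altitude h [km]."""
    d = math.sqrt(2.0 * R * h_km + h_km ** 2)  # line-of-sight horizon range
    return 2.0 * d

for h in (18.0, 20.0):
    print(f"h = {h:.0f} km -> geometric horizon diameter ~ {horizon_diameter(h):.0f} km")
```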
CIRA is also designing the platform payload. The design goal is to define a new wide-area sensor based on visible, thermal, or hyperspectral cameras, with a better resolution than satellites. In this way, it will be possible to persistently detect environmental anomalies in order to alert the other platforms. A second sensor with a very high focal length will also be used to avoid false-positive alerts.
In this paper, we will present the main characteristics of HAPS platforms and how they, in synergy with other platforms, would lead to considerable advantages in environmental monitoring. In particular, we will discuss the multi-platform paradigm, the current platform limits and their influence on the effectiveness of the paradigm in the context of environmental monitoring, the characteristics of the HAPS platform that CIRA is currently designing at conceptual level within the OT4clima Project, and the main issues related to its payload design.
How to cite: Persechino, G., Baraniello, V., Parrilli, S., Tufano, F., and Rianna, G.: HAPS role of in Earth Observation Multi-platform Paradigm for Environmental Monitoring, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21962, https://doi.org/10.5194/egusphere-egu2020-21962, 2020.
EGU2020-2466 | Displays | ITS4.5/GI1.4
Site-specific management zones delineation and Yield prediction for rice based cropping system using on-farm data sets in Tolima (Colombia)Sofiane Ouazaa, Oscar Barrero, Yeison Mauricio Quevedo Amaya, Nesrine Chaali, and Omar Montenegro Ramos
In the valley of the Alto Magdalena, Colombia, intensive agriculture and inefficient soil and water management techniques have generated within-field yield spatial variability, which has increased the production costs of the rice-based cropping system (a rice, cotton, and maize rotation). Crop yield variations depend on the interaction between climate, soil, topography, and management, and are strongly influenced by the spatial and temporal availability of water and nutrients in the soil during the growing season. Understanding why yield in certain portions of a field is highly variable is of paramount importance from both an economic and an environmental point of view, as it is through better management of these areas that we can improve yields or reduce input costs and environmental impact. The aims of this study were 1) to predict rice yield using an on-farm dataset and machine learning, and 2) to compare delineated management zones (MZ) for the rice-based cropping system with physiological parameters and within-field yield variation.
Seventy-two spatially distributed sampling points were defined in a 5-hectare plot at the Nataima research center of Agrosavia. For each sampling point, physical and chemical soil properties, biomass, and relative chlorophyll content were determined at different vegetative stages. A multispectral camera mounted on an Unmanned Aerial Vehicle (UAV) was used to acquire multispectral images over the rice canopy in order to estimate vegetation indices. Five nonlinear models and two multilinear algorithms were employed to estimate rice yield. The fuzzy cluster analysis algorithm was used to classify the soil data into two to six MZ. The appropriate number of MZ was determined from a fuzziness performance index and the normalized classification entropy.
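The zone-delineation step can be sketched as follows: run fuzzy c-means for 2 to 6 zones and pick the count that minimizes the fuzziness performance index (FPI) and normalized classification entropy (NCE). This is a generic NumPy implementation under assumed defaults (fuzziness exponent m = 2, synthetic standardized soil data), not the authors' exact pipeline.

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means; returns the membership matrix U (n x c)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.dirichlet(np.ones(c), size=n)           # random fuzzy memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))          # standard FCM update
        U /= U.sum(axis=1, keepdims=True)
    return U

def fpi(U):
    """Fuzziness Performance Index: 0 = crisp partition, 1 = maximally fuzzy."""
    n, c = U.shape
    F = (U ** 2).sum() / n                          # partition coefficient
    return 1.0 - (c * F - 1.0) / (c - 1.0)

def nce(U):
    """Normalized Classification Entropy (entropy / log c)."""
    n, c = U.shape
    H = -(U * np.log(U + 1e-12)).sum() / n
    return H / np.log(c)

# Synthetic standardized soil properties for 72 sampling points
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (36, 4)), rng.normal(3, 1, (36, 4))])

for c in range(2, 7):                               # evaluate 2..6 zones
    U = fuzzy_cmeans(X, c)
    print(c, round(fpi(U), 3), round(nce(U), 3))
```

Lower FPI and NCE indicate a partition that is both less fuzzy and less disordered, which is the criterion the abstract uses to settle on two zones.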
The rice yield prediction results showed that the best performance was obtained by the K-Nearest Neighbors (KNN) regression algorithm, with an average absolute error of 10.74%. The performance of the other algorithms was acceptable, except for Multiple Linear Regression (MLR). MLR showed the highest RMSE, 2712.26 kg ha-1 on the test dataset, while KNN regression was the best with 1029.69 kg ha-1. These findings show the importance machine learning could have for supporting decisions in agricultural process management.
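The KNN-versus-MLR comparison can be reproduced in outline with scikit-learn. The features, yield response, and train/test split below are synthetic placeholders (the on-farm dataset is not public here); the sketch only shows the evaluation pattern: fit both models, compare RMSE and mean absolute percentage error on held-out points.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic stand-in for the on-farm dataset: 72 sampling points with
# hypothetical standardized soil/VI features and a nonlinear yield response.
rng = np.random.default_rng(0)
X = rng.normal(size=(72, 6))
y = 8000.0 + 1500.0 * np.tanh(X[:, 0]) + 300.0 * X[:, 1] + rng.normal(0.0, 200.0, 72)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

results = {}
for name, model in [("KNN", KNeighborsRegressor(n_neighbors=5)),
                    ("MLR", LinearRegression())]:
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    results[name] = {
        "rmse": mean_squared_error(y_te, pred) ** 0.5,             # kg/ha
        "mape": float(np.mean(np.abs(pred - y_te) / y_te)) * 100.0  # %
    }

for name, r in results.items():
    print(f"{name}: RMSE = {r['rmse']:.0f} kg/ha, mean abs error = {r['mape']:.1f}%")
```

On strongly nonlinear yield surfaces a local method like KNN can beat a global linear fit, which is consistent with the RMSE ranking reported above.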
The cluster analyses revealed that two zones were the optimal number of classes based on the different criteria. The delineated zones were evaluated and revealed significant differences (p≤0.05) in sand, apparent density, total porosity, pH, organic matter, phosphorus, calcium, magnesium, iron, zinc, cover and boron. The relative chlorophyll content of the cotton and maize crops showed a spatial distribution pattern similar to the delineated MZ. The results demonstrate the ability of the proposed procedure to delineate a farmer's field into zones based on spatially varying soil and crop properties that should be considered for irrigation and fertilization management.
How to cite: Ouazaa, S., Barrero, O., Quevedo Amaya, Y. M., Chaali, N., and Montenegro Ramos, O.: Site-specific management zones delineation and Yield prediction for rice based cropping system using on-farm data sets in Tolima (Colombia), EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2466, https://doi.org/10.5194/egusphere-egu2020-2466, 2020.
EGU2020-15941 * | Displays | ITS4.5/GI1.4 | Highlight
Data assimilation of remote sensing data for farm scale maize fertilization in northern ItalyCalogero Schillaci, Edoardo Tomasoni, Marco Acutis, and Alessia Perego
It is well known that vegetation indices can offer a picture of the nutritional status of the crop and thus help improve nitrogen fertilization. In this study, field management information (maize sowing and harvesting dates, tillage, fertilization) and estimated vegetation indices (VIs: Sentinel-2 derived Leaf Area Index LAI, Normalized Difference Vegetation Index NDVI, Fraction of absorbed Photosynthetically Active Radiation fPAR) were analysed to develop a batch-mode VI routine to manage high-dimensional temporal and spatial data for Decision Support Systems (DSS) in precision agriculture, and to optimize maize N fertilization in the field. The study was carried out on maize (2017-2018) on a farm located in Mantua (northern Italy); the soil is a Vertic Calciustepts with a fine silty texture and a moderate content of carbonates. A collection of Sentinel-2 images (with <25% cloud cover) was processed using the Graph Processing Tool (GPT). This tool is used through the console to execute Sentinel Application Platform (SNAP) raster data operators in batch mode. The workflow applied to the Sentinel images consisted of resampling each band to 10 m pixel size and splitting the data into subsets according to the farm boundaries using a Region of Interest (ROI). The Biophysical Operator of the Biophysical Toolbox was used to derive LAI and fPAR for the estimation of maize vegetation indices from emergence until senescence. Yield data were acquired with a volumetric yield sensor on a combine harvester. Fertilization plans were then calculated for each field prior to the side-dressing fertilization. The routine is meant as a user-friendly tool to obtain time series of assimilated VIs at middle and high spatial resolution for field crop fertilization. It also overcomes the failures of the open-source graphical user interface of SNAP. For the year 2018, yield data were related to the 34 LAI maps derived from Sentinel-2A products at 10 m spatial resolution (R2=0.42).
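Such a batch-mode routine typically amounts to a small driver script that builds one `gpt` invocation per Sentinel-2 scene. The sketch below only constructs the command lines (dry run); the graph file name, the `-P` parameter names, the ROI polygon, and the scene names are hypothetical placeholders, since the actual parameters are defined by the SNAP graph XML in use.

```python
from pathlib import Path

# Hypothetical SNAP graph implementing resample -> subset(ROI) -> Biophysical
GRAPH = "lai_fpar_graph.xml"
ROI_WKT = "POLYGON((10.7 45.1, 10.8 45.1, 10.8 45.2, 10.7 45.2, 10.7 45.1))"

def gpt_command(scene: Path, out_dir: Path) -> list[str]:
    """Command line for one Sentinel-2 scene (built here, not executed)."""
    return [
        "gpt", GRAPH,
        f"-Pinput={scene}",
        f"-Proi={ROI_WKT}",
        f"-Poutput={out_dir / (scene.stem + '_biophys.dim')}",
    ]

# Hypothetical monthly scenes for the 2018 growing season
scenes = [Path(f"S2A_MSIL2A_2018{m:02d}01.SAFE") for m in range(4, 10)]
commands = [gpt_command(s, Path("out")) for s in scenes]
for cmd in commands[:2]:
    print(" ".join(cmd))
```

In production each command would be passed to `subprocess.run`, turning the cloud-filtered scene list into a time series of LAI/fPAR subsets clipped to the farm ROI without touching the SNAP GUI.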
This result highlights a trend that can be further studied to define a clustering strategy based on soil properties. As a further step, we will test whether spatial differences in assimilated VIs, integrated with yield data, can guide nitrogen top-dress fertilization in a quantitative way more accurately than a single image or a collection of single images.
How to cite: Schillaci, C., Tomasoni, E., Acutis, M., and Perego, A.: Data assimilation of remote sensing data for farm scale maize fertilization in northern Italy, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-15941, https://doi.org/10.5194/egusphere-egu2020-15941, 2020.
EGU2020-11270 | Displays | ITS4.5/GI1.4
Optimal seasonal water allocation and model predictive control for precision irrigation
Ruud Kassing, Bart de Schutter, and Edo Abraham
With population growth and a rising demand for meat-based diets and energy, global water demand will grow significantly over the next two decades. Agriculture is the largest global consumer of available water resources, responsible for almost 70% of annual water withdrawals. Therefore, a pivotal step in addressing the alarming water-scarcity problem is improving water-use efficiency in agriculture. This is a complex problem that many farmers face yearly: how to distribute available water optimally to maximize seasonal yield, while considering the uncertainty in future water resources (e.g. seasonal rainfall).
In our work, we consider the general problem of optimal soil moisture regulation of multiple fields (e.g., a plantation) for a full growth season, where allocating water optimally over the growth season is considered together with daily irrigation scheduling for multiple fields. This adds complexity to the control problem, as operational constraints need to be included (such as a limited number of fields that can be irrigated in a day) and trade-offs need to be made between irrigation and potential yield of the different fields. Furthermore, the growth stages of the fields can be different, as often not all fields can be planted and harvested at the same time.
We propose a methodology to decompose this complex problem into two separate optimisation problems, which are solved using a two-level structure consisting of a scheduler for seasonal allocation and a model predictive controller for daily irrigation. In this approach, the scheduler determines the optimal allocation of water over the fields for the entire growth season to maximize the sum of the fields' crop yields, by considering a linear approximation of the multiplicative crop productivity function. In addition, the model predictive controller minimizes the daily water stress by regulating the soil moisture of the fields within a water-stress-free zone. This requires a model of the interaction between the soil, the atmosphere, and the crop. A simple water balance model is created for which the saturation dynamics are modeled explicitly using conditionally switched depletion dynamics to improve model quality. To further improve the controller's performance, we create an evapotranspiration model by considering the expected development of the crop over the season using remote-sensing-based measurements of the canopy cover. The presented methodology can handle resource and hydraulic infrastructure constraints. Therefore, our approach is generic, as it is not restricted to a specific irrigation method, crop, soil type, or local environment. The performance of the two-level approach is evaluated through a closed-loop simulation in AquaCrop-OS of a real sugarcane plantation in Mozambique. Our optimal control approach boosts water productivity by up to 30% compared to local heuristics and can respect water use constraints that arise in times of drought.
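As an illustration of the scheduler's linearised allocation step (not the authors' implementation, which also handles scheduling and infrastructure constraints), the simplest version of the problem, a single seasonal water budget with per-field demand caps and linear marginal productivities, can be solved greedily:

```python
def allocate_seasonal_water(productivity, caps, budget):
    """Greedy solution of a linearised seasonal allocation problem:

        maximise  sum_i productivity[i] * x[i]
        s.t.      sum_i x[i] <= budget,  0 <= x[i] <= caps[i]

    With a linear objective and a single budget constraint, filling fields
    in order of marginal productivity is optimal. All values are
    illustrative stand-ins for the paper's scheduler.
    """
    order = sorted(range(len(productivity)),
                   key=lambda i: productivity[i], reverse=True)
    x = [0.0] * len(productivity)
    remaining = budget
    for i in order:
        x[i] = min(caps[i], remaining)  # give the best field what it needs
        remaining -= x[i]               # pass the rest down the ranking
    return x

# Three fields: marginal yield per mm of water, per-field seasonal caps,
# and a 500 mm total budget
alloc = allocate_seasonal_water([1.2, 0.8, 1.5], [300, 300, 300], 500)
```

The most productive field is served first (300 mm), the next takes the remaining 200 mm, and the least productive field receives nothing; with nonlinear yield responses or day-level constraints, a proper optimiser replaces this ranking rule.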
How to cite: Kassing, R., de Schutter, B., and Abraham, E.: Optimal seasonal water allocation and model predictive control for precision irrigation, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11270, https://doi.org/10.5194/egusphere-egu2020-11270, 2020.
EGU2020-22608 | Displays | ITS4.5/GI1.4
Commercial wheat fertilization based on nitrogen nutrition index and yield forecast
Carmen Plaza, María Calera, Jaime Campoy, Anna Osann, Alfonso Calera, and Vicente Bodas
This work describes the practical application, on commercial wheat plots, of the methodology developed and evaluated in Albacete, Spain, in the framework of the FATIMA project (http://fatima-h2020.eu/). The application considers two different methodologies for prescribing nitrogen management prior to the flowering season, based on the diagnosis of crop nitrogen status through nitrogen nutrition index (NNI) maps and on a spatially distributed yield forecast. The NNI is the ratio of the actual nitrogen concentration (Na) to the critical nitrogen concentration (Nc) of the crop analysed (Justes et al. 1997). Nitrogen uptake was determined from the relationship between Nc and biomass, where biomass was estimated by a crop growth model based on water productivity. Na was derived from the relationship between the amount of nitrogen in the canopy, estimated from a Red-edge-based spectral vegetation index, and the biomass. Knowledge of the NNI allows fertilizing at critical moments throughout the wheat campaign. NNI maps for the analysed plots were obtained throughout wheat development up to flowering, on eight dates in the study campaign. The yield forecast is calculated through the relationship between biomass and the harvest index. The spatially distributed yield relies on the use of management zone maps (MZM) based on temporal series of remote sensing data. The MZMs were calculated for the pre-flowering stage to estimate yield and capture the within-field variability of wheat production. Thus, the classical N balance model is used to calculate the N requirements at pixel scale, varying the target yield according to the MZM. The practical application was made on commercial wheat plots in the study area, analysing the performance of the proposed nitrogen fertilization strategies.
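A minimal sketch of the NNI computation follows, assuming the classical critical dilution curve Nc = a·W^(−b); the coefficients below are those commonly attributed to Justes et al. for winter wheat and should be checked against the original source, so treat them as placeholders:

```python
def critical_n(biomass_t_ha, a=5.35, b=0.442):
    """Critical N concentration Nc (% of dry matter) from the dilution
    curve Nc = a * W^(-b), with W the shoot biomass in t/ha.

    a, b are assumed wheat coefficients (check against Justes et al.);
    the curve is usually applied only above ~1.5 t/ha of biomass.
    """
    return a * biomass_t_ha ** (-b)

def nni(actual_n, biomass_t_ha):
    """Nitrogen Nutrition Index Na / Nc; NNI < 1 indicates N deficiency,
    NNI > 1 indicates luxury N consumption."""
    return actual_n / critical_n(biomass_t_ha)

# A field pixel with 4 t/ha of biomass and 2.8 % measured N concentration
index = nni(2.8, 4.0)
```

Applied per pixel, with Na from the Red-edge index and W from the growth model, this yields the NNI maps used for the side-dressing decision.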
The results indicate that the N application can be optimized while maintaining or increasing wheat productivity and reaching the highest levels of protein content in the area.
Keywords: Remote sensing, wheat, biomass, nitrogen nutrition index (NNI), fertilization.
How to cite: Plaza, C., Calera, M., Campoy, J., Osann, A., Calera, A., and Bodas, V.: Commercial wheat fertilization based on nitrogen nutrition index and yield forecast, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22608, https://doi.org/10.5194/egusphere-egu2020-22608, 2020.
EGU2020-22058 * | Displays | ITS4.5/GI1.4 | Highlight
Nature 4.0 – Intelligent networked systems for ecosystem monitoring
Nicolas Friess, Marvin Ludwig, Christoph Reudenbach, and Thomas Nauss and the Nature 4.0-Team
Successful conservation strategies and adaptive management require frequent observations and assessments of ecosystems. Depending on the conservation target, this is commonly achieved by monitoring schemes carried out locally by experts. In general, these expert surveys provide a high level of detail which, however, is traded off against the limited spatial coverage and repetition with which they are commonly executed. Thus, it is common practice to spatially expand these observations by remote sensing techniques. For resilient monitoring, both the expert observations and the spatio-temporal upscaling have to be extended by automated measurements and reproducible modelling. Therefore, Nature 4.0 is developing a prototype of a modular environmental monitoring system for spatially and temporally high-resolution observations of species, habitats and key processes. This prototype system is being developed in the Marburg Open Forest, an open research, education and development platform for environmental observation methods. Here, we present the experiences and challenges of the first year, with a focus on the conceptual design and the first implementation of the core observation subsystems and their comparison with the data collected by classical field surveys and remote sensing. The spatially distributed acquisition of abiotic and biotic environmental parameters is based on self-developed as well as third-party sensor technology. This includes an automated area-wide radiotracking system for bats and birds and sensor units for measurements of microclimatic conditions and tree sap flow, as well as spectral imaging and soundscape recording. The backbone of the automated data collection and transmission is an autonomous LoRa and WiFi mesh network, which is connected to the internet via radio relay.
By utilizing powerful data integration and analysis methods, the system will enable researchers, conservationists and the public to effectively observe landscapes through a set of diverse lenses. Here, we present first results as well as an outlook for future developments of intelligent networked systems for ecosystem monitoring.
How to cite: Friess, N., Ludwig, M., Reudenbach, C., and Nauss, T. and the Nature 4.0-Team: Nature 4.0 – Intelligent networked systems for ecosystem monitoring, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22058, https://doi.org/10.5194/egusphere-egu2020-22058, 2020.
EGU2020-5700 | Displays | ITS4.5/GI1.4
Image-by-image calibration of thermal infrared data
Rene Heim, Xiaolei Guo, Alina Zare, and Diane Rowland
Surface temperature retrieval through thermal infrared imaging (TIR) is currently being applied in a multitude of natural science disciplines (e.g. agriculture and ecology). Enormous progress in sensor design, electronics, and computer science renders TIR systems easily applicable and accessible for research, industry and even private use. However, despite the existence of factory-set theoretical calibration models that are supposed to facilitate accurate conversion of digital pixel values into temperature values, complex environmental noise and handling parameters hamper the reliable and stable collection of temperature data. Here, we present an image-by-image calibration method that can potentially account for such environmental noise.
We used custom-built thermal calibration panels, a close-range thermal camera and a thermocouple to collect thermal images and ground-truth temperatures of peanut plants (Arachis hypogaea L.) in an open and a sheltered agricultural setting. Linear models were trained and tested to investigate whether an image-by-image calibration approach improves the accuracy of digital-value-to-temperature conversion over calibrating once before an experiment, as well as before and after an experiment. For both the sheltered and the open setting, we collected data on multiple days.
Our data indicate that there are marked differences in calibration model stability and accuracy between the open and sheltered setting. For the open setting the image-by-image calibration resulted in lower mean absolute temperature errors (MAE = 0.9°C) compared to the sheltered setting (MAE = 4.37°C). We also found that the intercept and slope of the image-by-image calibration models varied substantially under open conditions. Between two images, both captured less than two minutes apart, the digital number to temperature conversion (intercept) could vary by up to 15°C. By contrast, the intercepts derived from the sheltered scenario rarely varied by more than 5 °C.
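The per-image calibration described above can be sketched as a linear fit of panel digital numbers against their reference temperatures, refitted for every image; the panel values below are invented for illustration and are not the study's data:

```python
import numpy as np

def calibrate_image(panel_dn, panel_temp):
    """Fit a per-image linear model T = slope * DN + intercept from the
    calibration panels visible in that image, and return a converter
    valid only for that image (the next frame gets its own fit)."""
    slope, intercept = np.polyfit(panel_dn, panel_temp, 1)

    def to_temperature(dn):
        return slope * np.asarray(dn, dtype=float) + intercept

    return to_temperature, slope, intercept

# Two panels with known reference temperatures seen in one image
convert, slope, intercept = calibrate_image([8000, 12000], [15.0, 45.0])
canopy_temp = convert(10000)   # DN of a canopy pixel in the same image
```

Tracking `slope` and `intercept` across frames is exactly what reveals the drift reported above: in the open setting the per-image intercepts moved by many degrees between frames taken minutes apart.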
Our results show that an image-by-image calibration can be preferable to obtain reliable and accurate temperature data. Such data can be crucial to monitor and detect abiotic and biotic stress in animal and plant food production systems where differences in temperature can be very subtle. A reduction of stressors in such systems is often coupled with an increase in yield. At the EGU 2020, we would like to share our research, and some extensions of it, to receive constructive feedback to drive future research on how to reliably and accurately collect sensitive surface temperatures in industry and research.
How to cite: Heim, R., Guo, X., Zare, A., and Rowland, D.: Image-by-image calibration of thermal infrared data, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5700, https://doi.org/10.5194/egusphere-egu2020-5700, 2020.
EGU2020-21611 | Displays | ITS4.5/GI1.4
Sentinel 2 data and fuzzy algorithm for mapping burned areas and fire severity in the Vesuvio National Park, Italy
Erika Piaser, Giovanna Sona, Matteo Sali, Mirco Boschetti, Pietro Alessandro Brivio, Gloria Bordogna, and Daniela Stroppiana
Sentinel-2 Multi-Spectral Instrument (MSI) (S-2) images have been used for mapping burned areas and fire severity within the borders of the Vesuvio National Park, Italy, which was affected by fires during summer 2017. A fuzzy algorithm, previously developed for Mediterranean ecosystems and Landsat data, has been adapted and applied to S-2 images. Major improvements with respect to the previous algorithm are i) the use of S-2 band reflectance in post-fire images and as temporal difference (delta pre- and post-fire) and ii) the definition of fuzzy membership functions based on statistics (percentiles) of reflectance derived from training areas.
The following input bands were selected based on their ability to discriminate burned vs. unburned areas: post-fire NIR (Near Infrared, S-2 band 8), post-fire RE (Red Edge, S-2 bands 6 and 7) and temporal difference (delta post-pre fire) of the same bands and additionally of SWIR2 (ShortWave Infrared, S-2 band 12).
For each input, a sigmoid function has been defined based on percentiles of the unburned and burned histogram distributions, respectively, derived from training data. In this way, and in contrast to the previous formulation of the algorithm, membership functions can be defined in an automated way when ancillary layers are provided for extracting statistics of burned and unburned surfaces.
Input membership degrees for the selected bands have been integrated to derive pixel-based synthetic scores of burned likelihood with Ordered Weighted Averaging (OWA) operators. Different operators were tested to represent different attitudes/needs of the stakeholders, between pessimistic (mapping the maximum extent of the phenomenon to minimise the chance of underestimating) and optimistic (minimising the chance of overestimating).
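A minimal sketch of the percentile-anchored sigmoid membership and the OWA aggregation follows; the anchor values and band choices are illustrative stand-ins, not the study's fitted parameters:

```python
import numpy as np

def sigmoid_membership(x, unburned_p, burned_p):
    """Sigmoid membership to the 'burned' class, anchored on a percentile
    of the unburned distribution and one of the burned distribution.

    The centre sits halfway between the anchors; the signed width makes
    membership increase towards the burned anchor, whichever side it is on.
    """
    centre = 0.5 * (unburned_p + burned_p)
    width = (burned_p - unburned_p) / 2.0 or 1.0
    return 1.0 / (1.0 + np.exp(-(np.asarray(x, float) - centre) / width))

def owa(memberships, weights):
    """Ordered Weighted Averaging: sort degrees in descending order, then
    take the weighted sum. weights = [1, 0, ...] gives the optimistic
    'max'; uniform weights give the mean."""
    ordered = np.sort(np.asarray(memberships))[::-1]
    return float(np.dot(ordered, weights))

# One pixel: low post-fire NIR and a positive delta SWIR2 both suggest
# burning (anchor percentiles are invented for the example)
score = owa([sigmoid_membership(0.05, 0.35, 0.10),   # post-fire NIR
             sigmoid_membership(0.40, 0.05, 0.30)],  # delta SWIR2
            [0.5, 0.5])
```

Swapping the OWA weight vector, rather than the memberships, is what moves the map between the pessimistic and optimistic extremes.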
Output score maps provided as continuous values in the [0,1] domain have been segmented to extract burned/unburned areas; the performance of the combined threshold and OWA operator has been evaluated by comparison with Copernicus fire damage layers from the Emergency Management Service (EMS) (https://emergency.copernicus.eu/). Error matrix, F-score and omission and commission error metrics have been analysed.
Finally, the fuzzy scores derived by applying OWA operators have been analysed by comparison with Copernicus EMS fire damage layers as well as with fire severity computed as the temporal difference of the NBR index. Results show that satisfactory accuracy is achieved for the identification of the most severely affected areas, while lower performance is observed for areas identified as slightly damaged and probably affected by fires of lower intensity. Moreover, some discrepancies have been observed between different fire severity layers due to the non-unique definition of the criteria used for assessing the impact of fires on the vegetation layer.
How to cite: Piaser, E., Sona, G., Sali, M., Boschetti, M., Brivio, P. A., Bordogna, G., and Stroppiana, D.: Sentinel 2 data and fuzzy algorithm for mapping burned areas and fire severity in the Vesuvio National Park, Italy, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21611, https://doi.org/10.5194/egusphere-egu2020-21611, 2020.
EGU2020-259 | Displays | ITS4.5/GI1.4
Geodetical and seismological evidences of stress transfer between Mauna Loa and Kilauea
Monika Przeor, Luca D'Auria, Susi Pepe, and Pietro Tizzani
Different studies evidenced an anticorrelated pattern behavior the activity of Mauna Loa and Kilauea volcanoes. We quantitatively demonstrate the existence of this pattern by using DInSAR SBAS time series, areal strain of horizontal GPS components and the spatial distribution of hypocenters. The DInSAR time series have been studied by using the Independent Component Analysis (ICA) statistical algorithm revealing an anticorrelated ground deformation pattern between sources located at shallow depths beneath Mauna Loa and Kilauea. Furthermore, ICA showed another independent source beneath Kilauea alone, being located at greater depth. A similar pattern was observed in the time series of areal strain of GPS data as well as by spatial distribution of earthquakes depths.
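The idea of separating mixed deformation sources with ICA can be sketched on synthetic data. This is a generic blind-source-separation illustration (using scikit-learn's FastICA on hypothetical stand-in signals), not the authors' SBAS processing chain:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two independent source time functions mixed into many "pixel"
# displacement time series, standing in for DInSAR SBAS data.
rng = np.random.default_rng(42)
t = np.linspace(0, 8, 500)
s1 = np.sin(2 * t)               # e.g. an inflation/deflation cycle
s2 = np.sign(np.sin(3 * t))      # e.g. an episodic deformation source
sources = np.column_stack([s1, s2])
mixing = rng.normal(size=(2, 60))   # spatial response of 60 pixels
obs = sources @ mixing + 0.05 * rng.normal(size=(500, 60))

ica = FastICA(n_components=2, random_state=0)
components = ica.fit_transform(obs)   # (500, 2) recovered time functions

# Each recovered component matches one true source up to sign and order.
match = np.abs(np.corrcoef(components.T, sources.T)[:2, 2:])
print(np.round(match, 2))
```

In the real application each recovered component comes with a spatial mixing map, whose sign pattern is what reveals the anticorrelation between the two edifices.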
The anticorrelated behaviour of the two volcanoes has been explained by the crustal-level interaction of pulses of magma that cause pressure variations in the shallow magma system [1]. Another proposed explanation is an interaction through pore pressure diffusion in a thin accumulation layer of the asthenosphere [2]. Geochemical and petrological studies [5], however, point to the existence of separate reservoirs for Mauna Loa and Kilauea.
The aim of this work is to explain the mechanism that allows the crustal-level relationship between the shallow ground deformation sources of the two volcanoes. We applied inverse modelling to determine the geometries of the magmatic reservoirs beneath Mauna Loa and Kilauea and their dynamics. This method proved to be a useful tool for better understanding the dynamics and representing the interaction between Mauna Loa and Kilauea.
Our results indicate that the interaction between the ground deformation sources of Mauna Loa and Kilauea occurs at shallow depths; we therefore excluded a direct interconnection between their magmatic systems and instead postulate a stress transfer mechanism that explains this interaction. Such a mechanism has been invoked by several authors to explain the intrusions along rift zones and the interaction between earthquakes and eruptions at these two volcanoes [3, 4]. The ascent of magma in the Mauna Loa edifice creates a stress field at Kilauea which makes it more difficult for magma to ascend into its shallower reservoir. The same mechanism could act in the opposite direction.
[1] A. Miklius and P. Cervelli, “Interaction between Kilauea and Mauna Loa,” Nature, vol. 421, no. 6920, p. 229, 2003.
[2] H. M. Gonnermann, J. H. Foster, M. Poland, C. J. Wolfe, and B. A. Brooks, “Coupling at Mauna Loa and Kilauea by stress transfer in an asthenospheric melt layer,” Nat. Geosci., vol. 5, no. 11, pp. 826–829, 2012.
[3] F. Amelung, S.-H. Yun, T. Walter, and P. Segall, “Stress Control of Deep Rift Intrusion at Mauna Loa Volcano, Hawaii,” Science, vol. 316, pp. 1026–1030, 2007.
[4] D. A. Swanson, W. A. Duffield, and R. S. Fiske, “Displacement of the south flank of Kilauea Volcano: the result of forceful intrusion of magma into the rift zones,” U.S. Geol. Surv. Prof. Pap. 963, 39 pp., 1976.
[5] J. M. Rhodes and S. R. Hart, “Episodic trace element and isotopic variations in historical Mauna Loa lavas: Implications for magma and plume dynamics,” Geophys. Monogr. Ser., vol. 92, pp. 263–288, 1995.
How to cite: Przeor, M., D'Auria, L., Pepe, S., and Tizzani, P.: Geodetical and seismological evidences of stress transfer between Mauna Loa and Kilauea, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-259, https://doi.org/10.5194/egusphere-egu2020-259, 2020.
EGU2020-1487 | Displays | ITS4.5/GI1.4
Quantification of heavy metals in agricultural soils: the influence of sieving in standard analytical methods
Anne Karine Boulet, Adelcia Veiga, Carla Ferreira, and António Ferreira
Conservation of agricultural soils is a topic of major concern, namely through the increase of soil organic matter. The SoilCare project (https://www.soilcare-project.eu/) aims to enhance the quality of agricultural soils in Europe through the implementation and testing of Soil Improving Cropping Systems in 16 study sites. In Portugal, the application of urban sewage sludge amendments to agricultural soils has been investigated. However, this application is a sensitive topic, due to the risk of long-term accumulation of heavy metals and consequent contamination of the soil. The recent Portuguese legislation (Decree-Law 103/2015) is more restrictive than the preceding one (Decree-Law 276/2009) in terms of maximum concentrations of heavy metals in agricultural soils. The analytical quantification of heavy metals, however, raises some methodological questions associated with soil sample pre-treatment, due to imprecisions in standard analytical methods. For example, ISO 11466, regarding extraction in Aqua Regia, provides two pre-treatment options: (i) sieve the soil sample through a 2 mm mesh (but if the mass for analysis is <2 g, milling and sieving the sample to <250 µm is required), or (ii) mill and sieve the soil sample through a 150 µm mesh. On the other hand, EN 13650 requires soil samples to be sieved at 500 µm. Since heavy metals in the soil are usually associated with finer particles, the mesh size used during the pre-treatment of soil samples may affect their quantification.
This study aims to assess the impact of soil particle size on total heavy metal concentrations in the soil. Soil samples were collected at 0–30 cm depth in an agricultural field with sandy loam texture, fertilized with urban sludge amendment for 3 years. These samples were then divided into four subsamples and sieved with 2 mm, 500 µm, 250 µm and 106 µm meshes (soil aggregates were broken up gently, but the soil was not milled). Finer and coarser fractions were weighed and analyzed separately. Heavy metals were extracted with the Aqua Regia method, using an analytical mass of 3 g, and quantified by atomic absorption spectrophotometry with graphite furnace (Cd) and flame (Cu, Ni, Pb, Zn and Cr).
Except for Cu, heavy metal concentrations increase linearly with the decline of the coarser fraction. This means that analyzing heavy metal content only in the finest fractions of the soil leads to an overestimation of their concentrations in the total soil. Results also show that the coarser fractions of the soil contain lower, but not negligible, concentrations of heavy metals. Calculating heavy metal concentrations in the soil as the weighted average of the fine and coarse fractions and their associated concentrations provides results similar to those derived from the analysis of heavy metals in the <2 mm fraction. This indicates that milling and analyzing finer fractions of the soil does not influence the quantification of heavy metals in the total soil, provided the fraction masses are accounted for. Clearer indications on analytical procedures should be provided in analytical standards, in order to properly assess heavy metal concentrations and compare the results with legislated soil quality standards.
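The mass-weighted averaging described above is simple arithmetic; a sketch with hypothetical masses and concentrations (not the study's measurements):

```python
def total_concentration(mass_fine, conc_fine, mass_coarse, conc_coarse):
    """Heavy-metal concentration of the whole soil as the mass-weighted
    average of the fine and coarse fractions (concentrations in mg/kg,
    masses in g)."""
    total_mass = mass_fine + mass_coarse
    return (mass_fine * conc_fine + mass_coarse * conc_coarse) / total_mass

# Hypothetical example: a sample sieved at 250 µm, 70 g fine fraction at
# 120 mg/kg and 30 g coarse fraction at 40 mg/kg.
print(total_concentration(mass_fine=70.0, conc_fine=120.0,
                          mass_coarse=30.0, conc_coarse=40.0))  # → 96.0
```

Reporting only the fine-fraction value (120 mg/kg here) would overestimate the whole-soil concentration (96 mg/kg), which is the bias the study quantifies.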
How to cite: Boulet, A. K., Veiga, A., Ferreira, C., and Ferreira, A.: Quantification of heavy metals in agricultural soils: the influence of sieving in standard analytical methods, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1487, https://doi.org/10.5194/egusphere-egu2020-1487, 2020.
EGU2020-4318 | Displays | ITS4.5/GI1.4
Exploring options for improving maize productivity for small farmers in the Loess Plateau
Xiukang Wang, Yaguang Gao, and Yingying Xing
Millions of small farmers rely on maize (Zea mays L.) produced in the Loess Plateau of China. However, little has been reported on the effects of plastic mulch and maize cultivar on crop yield in the check dam environment. The objectives of this experiment were to determine the effects of maize cultivar and plastic mulch on photosynthetic characteristics and grain yield when grown in the check dam environment. Three maize cultivars were assessed with and without plastic mulch in 2016 and 2017 in Ansai County, Shaanxi Province, China. Results showed that mulch increased grain yield by 10.5% in 2016 and 11.3% in 2017 across all cultivars. Among all cultivars, ‘Xianyu335’ had the highest grain yield both with and without mulch. Grain yield was significantly correlated with soil water content in the 0–20 cm layer. Soil temperature under mulch decreased with increasing soil depth. Averaged over soil depths, mulch increased soil temperature by 0.2 to 1.9 °C over the entire growing season. Maize cultivar directly determined photosynthetic characteristics. Grain yield was more closely related to photosynthetic rate in July than in August, and was significantly associated with stomatal conductance and transpiration rate. Our findings suggest that photosynthetic characteristics are an important factor affecting maize grain yield for small farmers using check dams in the Loess Plateau.
How to cite: Wang, X., Gao, Y., and Xing, Y.: Exploring options for improving maize productivity for small farmers in the Loess Plateau, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4318, https://doi.org/10.5194/egusphere-egu2020-4318, 2020.
EGU2020-264 | Displays | ITS4.5/GI1.4
Using GIS tools for the Play Fairway Analysis in geothermal exploration
Francisco Airam Morales González, Luca D'Auria, Fátima Rodríguez, Eleazar Padrón, and Nemesio Pérez
The Canary Islands archipelago, due to its recent volcanism, is the only Spanish territory with high-enthalpy geothermal resources. However, there is no evidence in the islands of manifestations of endogenous fluids, with the exception of the Teide fumaroles in Tenerife. Although some efforts were made to investigate the geothermal resources from the 1970s to the 1990s, and later during the past decade, the final goal has not yet been achieved: to locate and define the size, shape and structure of the geothermal resource, and to determine its characteristics and capacity to produce energy (Rodríguez et al. 2015). For this reason it is extremely important to use new tools that allow a better understanding of the geothermal resource. In this work we describe a probabilistic evaluation of the geothermal potential of the island of Tenerife using Geographical Information Systems (GIS) and a collection of geological, geophysical and geochemical data.
We used the Play Fairway Analysis (PFA), as illustrated by Lautze et al. (2017) in a similar study for an environment with similar characteristics: the Hawaiian Archipelago. The PFA approach consists of joining information from multidisciplinary datasets within a probabilistic framework. Basically, the probabilities related to the presence of heat (H), fluids (F) and permeability (P) are computed quantitatively from the starting datasets and combined to obtain the probability of the presence of geothermal resources, together with its confidence.
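The combination step can be sketched as raster algebra on the three element maps. This is a minimal illustration on hypothetical random rasters; the independence assumption (probability of the resource as the product H·F·P) is one common PFA choice, not necessarily the exact rule used by the authors:

```python
import numpy as np

# Hypothetical per-pixel probability rasters for the three play
# elements: heat (H), fluids (F) and permeability (P), each in [0, 1].
rng = np.random.default_rng(1)
H, F, P = (rng.uniform(0, 1, size=(4, 4)) for _ in range(3))

# Assuming the three elements are independent, the probability that a
# geothermal resource is present is the product of the element maps.
resource = H * F * P
print(resource.round(2))
```

In a GIS workflow the same product is computed with raster-calculator tools after resampling the H, F and P layers to a common grid.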
In the present study this probabilistic method has been implemented using GIS geoprocessing tools and raster image analysis on geological (Holocene vents, volcano-tectonic structures), geophysical (seismicity, resistivity data, gravity data) and geochemical (hydrogeochemistry, soil gas emission and geochemistry, etc.) data.
The main result of this work is a cartographic set showing the areas of Tenerife with the greatest potential for geothermal exploration. Furthermore, using the statistical framework of the PFA, we also obtained confidence intervals on the retrieved probability maps.
How to cite: Morales González, F. A., D'Auria, L., Rodríguez, F., Padrón, E., and Pérez, N.: Using GIS tools for the Play Fairway Analysis in geothermal exploration, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-264, https://doi.org/10.5194/egusphere-egu2020-264, 2020.
EGU2020-4764 | Displays | ITS4.5/GI1.4 | Highlight
Wavelet analysis of geochemical time series: continuous CO2 flux measurements at the summit cone of Teide volcano, Tenerife, Canary Islands
Germán D. Padilla, Luca D'Auria, Nemesio M. Peréz, Pedro A. Hernández, Eleazar Padrón, José Barrancos, Gladys Melián, and María Asensio-Ramos
Tenerife Island (2034 km²) is the largest of the Canarian archipelago and is characterized by three main volcano-tectonic axes, the NS, NE and NW dorsals, and a central caldera, Las Cañadas, hosting the twin stratovolcanoes Pico Viejo and Teide. Although Teide volcano shows a weak fumarolic system, the volcanic gas emissions observed in the summit cone consist mostly of diffuse CO2 degassing. The first continuous automatic geochemical station in the Canary Islands was installed at the south-eastern foot of the summit cone of Teide volcano in 1999, with the aim of improving the volcanic monitoring system and providing a multidisciplinary approach to the surveillance program of Teide volcano. The 1999–2011 time series shows anomalous changes of the diffuse CO2 emission, with values ranging between 0 and 62.8 kg m-2 d-1 and a mean value of 4.7 kg m-2 d-1. The CO2 efflux increases remained after filtering the time series with multiple regression analysis (MRA), in which soil temperature, soil water content, wind speed and barometric pressure explained 16.7% of the variability.
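The MRA filtering step amounts to regressing the efflux on the environmental drivers and keeping the residual as the "filtered" series. A sketch on synthetic data (all coefficients and series are hypothetical stand-ins, not the station data):

```python
import numpy as np

# Synthetic stand-in: CO2 efflux driven partly by environmental
# variables plus a slow "volcanic" signal the filtering should preserve.
rng = np.random.default_rng(0)
n = 1000
soil_temp = rng.normal(15, 5, n)
soil_water = rng.normal(0.2, 0.05, n)
wind = rng.normal(3, 1, n)
pressure = rng.normal(770, 5, n)
volcanic = 5 + 3 * np.sin(np.arange(n) * 2 * np.pi / 365)
co2 = (volcanic + 0.2 * soil_temp - 10 * soil_water
       + 0.3 * wind - 0.05 * pressure + rng.normal(0, 0.5, n))

# Multiple regression analysis: fit efflux against the drivers, then
# keep the residual (re-centred on the mean) as the filtered series.
X = np.column_stack([np.ones(n), soil_temp, soil_water, wind, pressure])
beta, *_ = np.linalg.lstsq(X, co2, rcond=None)
filtered = co2 - X @ beta + co2.mean()
print(f"correlation with the volcanic signal: "
      f"{np.corrcoef(filtered, volcanic)[0, 1]:.2f}")
```

The residual retains the slow volcanic modulation while the variance explained by the environmental drivers is removed.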
We analysed the CO2 efflux time series using the Continuous Wavelet Transform with the Ricker wavelet to detect relevant time-frequency patterns in the signal. The wavelet analysis showed, at low frequencies, quasi-periodic oscillations with periods of 3–4 years. Moreover, during the intervals of highest CO2 efflux levels, the analysis also evidenced oscillations with a period of about 6 months.
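A Ricker-wavelet CWT can be sketched as a bank of convolutions, one per wavelet width. This is a minimal self-contained implementation on a hypothetical efflux-like series, not the authors' processing; the dominant-scale readout at the end is one simple way to relate widths to periods.

```python
import numpy as np

def ricker(points, a):
    """Ricker ("Mexican hat") wavelet of width a, L2-normalized:
    the negative second derivative of a Gaussian."""
    t = np.arange(points) - (points - 1) / 2.0
    amp = 2.0 / (np.sqrt(3.0 * a) * np.pi ** 0.25)
    return amp * (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def cwt_ricker(signal, widths):
    """Continuous wavelet transform by direct convolution: one row of
    coefficients per wavelet width."""
    out = np.empty((len(widths), len(signal)))
    for i, a in enumerate(widths):
        n = min(10 * int(a), len(signal))
        out[i] = np.convolve(signal, ricker(n, a), mode="same")
    return out

# Hypothetical efflux-like series: one dominant oscillation plus noise.
rng = np.random.default_rng(0)
t = np.arange(2000)
signal = np.sin(2 * np.pi * t / 400) + 0.3 * rng.normal(size=t.size)
widths = np.arange(10, 200, 10)
scalogram = cwt_ricker(signal, widths)

# The width with the largest mean |coefficient| tracks the dominant
# period (for the Ricker wavelet, period ≈ 4 times the width).
best = widths[np.abs(scalogram).mean(axis=1).argmax()]
print("best-responding width:", best)
```

Plotting `scalogram` as an image over time and width gives the time-frequency picture in which the multi-year and ~6-month oscillations appear as ridges.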
Our data show a marked peak of the filtered CO2 signal in 2002. The beginning of this increase is nearly coincident with a similar signal in the CO2 emission data coming from the periodic surveys performed yearly in the area of the Teide summit cone since 1997. We interpret these signals as an “early warning” associated with the 2004 seismo-volcanic unrest in Tenerife. A similar coincidence was observed for the interval 2006–2009, which was likewise followed by an increase in the local seismicity of Tenerife, characterized by an increasing number of small earthquakes occurring mostly along the NW dorsal and in the southern part of the NE dorsal of Tenerife.
Our study reveals that wavelet analysis of continuous CO2 efflux measurements could help to detect anomalous degassing periods, possibly indicating impending seismo-volcanic unrest episodes and/or eruptions. Finally, it is important to remark that the data presented in this work constitute one of the longest time series of continuous CO2 efflux measurements in an active volcanic area, hence providing an important benchmark for similar measurements worldwide.
How to cite: Padilla, G. D., D'Auria, L., Peréz, N. M., Hernández, P. A., Padrón, E., Barrancos, J., Melián, G., and Asensio-Ramos, M.: Wavelet analysis of geochemical time series: continuous CO2 flux measurements at the summit cone of Teide volcano, Tenerife, Canary Islands, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4764, https://doi.org/10.5194/egusphere-egu2020-4764, 2020.
EGU2020-4899 | Displays | ITS4.5/GI1.4
Moving Average Convergence/Divergence analysis of geophysical and geochemical time series: application to the 2011-2012 El Hierro eruption
Robabeh Salehiozoumchelouei, Yousef Rajaeitabrizi, José Luis Sánchez de la Rosa, Luca D'Auria, and Nemesio M. Pérez
Financial market specialists often use multiscale analysis on different kinds of time series, and many tools have been developed for these tasks. Two of them, widely used, are candlestick charts and technical indicators. Our approach consists of using both tools to analyze geophysical and geochemical time series.
In this work we represent signals using candlesticks at user-selected time scales. In our case we use four summary quantities of the signal: the amplitude of the first sample, the maximum amplitude within the candle, the minimum amplitude, and the amplitude of the last sample of the candle. We show how the graphical candlestick representation alone is able to emphasize representative changes within the time series in a multiscale fashion.
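The four quantities above are exactly the open/high/low/close (OHLC) summary of financial charting, so candles can be built by resampling. A sketch on a hypothetical hourly signal (pandas `resample(...).ohlc()`, not necessarily the authors' implementation):

```python
import numpy as np
import pandas as pd

# Hypothetical hourly geochemical signal (e.g. a CO2 efflux record).
rng = np.random.default_rng(0)
idx = pd.date_range("2011-07-01", periods=24 * 60, freq="60min")
series = pd.Series(np.cumsum(rng.normal(0, 1, idx.size)) + 100, index=idx)

# One candle per day: open = first sample, high = maximum, low = minimum,
# close = last sample within each daily interval.
candles = series.resample("1D").ohlc()
print(candles.head())
```

Changing the resampling rule (e.g. "7D") gives the same signal at a coarser candle scale, which is the multiscale aspect exploited in the abstract.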
On the other hand, many technical indicators have been defined to extract further information from such charts. Among the most commonly used are the Simple Moving Average (SMA), the Exponential Moving Average (EMA) and the Moving Average Convergence/Divergence (MACD). The EMA is a temporal smoothing with an exponential weighting determined by a time scale factor. The MACD is the difference between an EMA computed at a short scale and another EMA computed at a larger scale. For instance, a commonly used MACD in financial markets is computed using scales of 12 and 26 days. In the case of actual geophysical and geochemical datasets, such scales should be selected on the basis of the time scales of interest.
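The EMA/MACD definitions above can be sketched in a few lines; the flat-then-rising test series is a hypothetical illustration of the derivative-proxy behaviour discussed next, not the authors' El Hierro data:

```python
import numpy as np
import pandas as pd

def macd(series, short=12, long=26):
    """MACD as the difference of a short- and a long-scale exponential
    moving average (scales in samples; 12/26 is the financial
    convention and should be adapted to the scales of interest)."""
    ema_short = series.ewm(span=short, adjust=False).mean()
    ema_long = series.ewm(span=long, adjust=False).mean()
    return ema_short - ema_long

# On a noisy flat-then-rising series the MACD behaves like a smoothed
# derivative: near zero on the flat part, positive on the rising part.
rng = np.random.default_rng(0)
x = np.concatenate([np.zeros(200), 0.1 * np.arange(200)])
noisy = pd.Series(x + rng.normal(0, 0.5, 400))
m = macd(noisy)
print(f"mean MACD, flat part: {m.iloc[:150].mean():+.2f}, "
      f"rising part: {m.iloc[250:].mean():+.2f}")
```

For a steady linear trend of slope c, the MACD settles at c times the difference of the two EMA lags, which is why it tracks the derivative of the underlying signal.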
Using tests on synthetic datasets, we demonstrate that the MACD is a proxy for the derivative of a time series, even with a very high noise level. This is of great interest when analyzing geophysical and geochemical time series with the aim of detecting changes in their trends. We applied candlestick analysis to various seismological and geochemical datasets; in particular, we show an example application to the recent 2011–2012 eruption of the island of El Hierro in the Canary Islands, highlighting the capability of this method to detect changes in the trend of time series earlier and better than other, simpler techniques.
How to cite: Salehiozoumchelouei, R., Rajaeitabrizi, Y., Sánchez de la Rosa, J. L., D'Auria, L., and Pérez, N. M.: Moving Average Convergence/Divergence analysis of geophysical and geochemical time series: application to the 2011-2012 El Hierro eruption, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4899, https://doi.org/10.5194/egusphere-egu2020-4899, 2020.
EGU2020-5750 | Displays | ITS4.5/GI1.4
Analyzing the 2011 eruption of Nabro volcano using satellite remote sensing and numerical modeling of lava flows
Ciro Del Negro, Gaetana Ganci, Annalisa Cappello, Giuseppe Bilotta, and Claudia Corradino
The 2011 eruption of Nabro volcano, situated at the southeast end of the Danakil Alps in Eritrea, was the first historical eruption on record at this volcano and one of the largest eruptions of the last decade. Due to the remote location of Nabro volcano and the lack of data from ground monitoring networks at the time of the eruption, satellite remote sensing gives the first global view of the event, providing insights on its evolution over time. Here we used numerical modeling and high-spatial-resolution satellite data (i.e. EO-ALI, ASTER, PlanetScope) to track the path and velocity of lava flows and to reconstruct the pre- and post-eruptive topographies in order to quantify the total bulk volume emitted. High-temporal-resolution images (i.e. SEVIRI and MODIS) were exploited to estimate the time-averaged discharge rate (TADR) and assess the dense rock equivalent (DRE) lava volumes constrained by the topographic approach. Finally, satellite-derived parameters were used as input and validation tags for the numerical modelling of lava flow scenarios, offering further insights into the eruption and emplacement dynamics. We found that the total volume of deposits, calculated from differences of digital elevation models (DEMs), is about 580 × 10⁶ m³, of which about 336 × 10⁶ m³ is the volume of the main lava flow that advanced eastward beyond the caldera. Multi-spectral satellite observations indicate that the main lava flow had reached its maximum extent (∼16 km) within about 4 days of the eruption onset at midnight on 12 June. Lava flow simulations driven by satellite-derived parameters allow building an understanding of the advance rate and maximum extent of the main lava flow, showing that it is likely to have reached 10.5 km in one day with a maximum speed of ~0.44 km/h.
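The core of the DEM-difference volume estimate can be illustrated with a minimal sketch. This is not the authors' workflow, which would also involve DEM generation, co-registration and uncertainty propagation; the function and its arguments are assumptions for illustration.

```python
import numpy as np

def deposit_volume(dem_pre, dem_post, cell_area):
    """Bulk deposit volume from pre- and post-eruptive DEMs.

    Positive elevation differences (new deposits) are summed and
    multiplied by the grid-cell area; with DEMs in metres and
    `cell_area` in m^2, the result is in m^3.
    """
    dh = dem_post - dem_pre
    return float(np.sum(dh[dh > 0]) * cell_area)
```

Restricting the sum to a mask of the main flow would give the partial volume quoted for the eastward lava flow.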
How to cite: Del Negro, C., Ganci, G., Cappello, A., Bilotta, G., and Corradino, C.: Analyzing the 2011 eruption of Nabro volcano using satellite remote sensing and numerical modeling of lava flows, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5750, https://doi.org/10.5194/egusphere-egu2020-5750, 2020.
EGU2020-4408 | Displays | ITS4.5/GI1.4
Lava flow risk assessment on Mount Etna through hazard and exposure modelling
Annalisa Cappello, Giuseppe Bilotta, Claudia Corradino, Gaetana Ganci, Alexis Hérault, Vito Zago, and Ciro Del Negro
Lava flows represent the greatest threat to exposed population and infrastructure on Mt Etna volcano (Italy). The increasing exposure of a larger population, which has almost tripled in the area around Mt Etna during the last 150 years, has resulted from poor assessment of the volcanic hazard, allowing inappropriate land use in vulnerable areas. We present a new methodology to quantify the lava flow risk on Etna’s flanks using a GIS-based approach that integrates the hazard with the exposure of elements at stake. The hazard, showing the long-term probability related to lava flow inundation, is obtained by combining three different kinds of information: the spatiotemporal probability of the future opening of new flank eruptive vents, the event probability associated with classes of expected eruptions, and the overlapping of lava flow paths simulated by the MAGFLOW model. Data including all exposed elements have been gathered from institutional web portals and high-resolution satellite imagery, and organized in four thematic layers: population, buildings, service networks, and land use. The total exposure is given by a weighted linear combination of the four thematic layers, where weights are calculated using the Analytic Hierarchy Process (AHP). The resulting risk map shows the likely damage caused by a lava flow eruption, allowing rapid visualization of the areas subject to the greatest losses if a flank eruption were to occur on Etna.
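The weighted linear combination of thematic layers can be sketched as follows. The layer names and weights here are hypothetical examples, and the AHP procedure that would actually derive the weights is not shown.

```python
import numpy as np

def total_exposure(layers, weights):
    """Weighted linear combination of thematic exposure layers.

    `layers` maps layer name -> 2-D grid normalized to [0, 1];
    `weights` maps layer name -> AHP-derived weight, summing to 1.
    Returns the total exposure grid.
    """
    names = list(layers)
    assert abs(sum(weights[n] for n in names) - 1.0) < 1e-9
    return sum(weights[n] * np.asarray(layers[n], dtype=float) for n in names)
```

Overlaying this exposure grid with the inundation-probability map yields the kind of risk map described above.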
How to cite: Cappello, A., Bilotta, G., Corradino, C., Ganci, G., Hérault, A., Zago, V., and Del Negro, C.: Lava flow risk assessment on Mount Etna through hazard and exposure modelling, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4408, https://doi.org/10.5194/egusphere-egu2020-4408, 2020.
EGU2020-7667 | Displays | ITS4.5/GI1.4
The Holocene volcanism of El Hierro, Canary Islands
Alejandro Rodríguez-González, Meritxell Aulinas, Francisco José Perez-Torrado, Constantino Criado Hernández, Maria del Carmen Cabrera, and Jose-Luis Fernandez-Turiel
El Hierro is, together with La Palma, the youngest island of the Canarian Archipelago. Both islands are in the shield stage of their volcanic growth, which implies high volcanic activity during the Holocene. The submarine eruption that occurred in October 2011 on the SSE rift of El Hierro evidenced the active volcanic character of the island. Even so, despite the numerous scientific works published following the submarine eruption (most of them centred on understanding that volcanic event), there is still a lack of precise knowledge about the Holocene subaerial volcanism of the island. The LAJIAL Project focuses on filling this knowledge gap.
The Holocene subaerial volcanism of El Hierro generated fields of monogenetic volcanoes linked to the three rift systems present on the island. Its eruptive mechanisms are typically Strombolian, although there are also phreato-Strombolian events. The most recent eruptions frequently formed lava on coastal platforms, which are considered younger than the last glacial maximum (approx. 20 ka BP). The most developed coastal platforms of El Hierro are at the ends of the rifts and in the interior of the El Golfo depression. This geomorphological criterion shows that more than thirty subaerial eruptions have taken place on El Hierro since approx. 20 ka BP. In addition, there are many apparently recent volcanic edifices far from the coast.
The research on the most recent volcanism of the island, covering the last 11,700 years of the Holocene, spans a period that is long enough for this purpose while remaining close to the present day. Thus, this period is the best in which to model the eruptive processes that will allow us to evaluate future scenarios of the eruptive dynamics of El Hierro. The LAJIAL Project combines methodologies of geological mapping, geomorphology, GIS, chronostratigraphy, paleomagnetism, petrology and geochemistry to determine the Holocene eruptive recurrence rate of El Hierro and to constrain the rift model of intraplate oceanic volcanic islands.
Financial support was provided by the Project LAJIAL (ref. PGC2018-101027-B-I00, MCIU/AEI/FEDER, EU). This study was carried out in the framework of the Research Consolidated Groups GEOVOL (Canary Islands Government, ULPGC) and GEOPAM (Generalitat de Catalunya, 2017 SGR 1494).
How to cite: Rodríguez-González, A., Aulinas, M., Perez-Torrado, F. J., Criado Hernández, C., Cabrera, M. C., and Fernandez-Turiel, J.-L.: The Holocene volcanism of El Hierro, Canary Islands, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7667, https://doi.org/10.5194/egusphere-egu2020-7667, 2020.
EGU2020-8153 | Displays | ITS4.5/GI1.4
Contemporary challenges for Shoreline Change Analysis
Sue Brooks, Jamie Pollard, and Tom Spencer
Shoreline change analysis has been deployed across a range of spatio-temporal scales, with studies seeking to capture shoreline dynamics from the local impacts of individual storms to global trends measured over multiple decades. The scale at which we can approach the issue of shoreline change is, to a large extent, determined by the availability of data over time and space. With existing threats from the interactions between accelerated sea-level rise, changing storminess and human intervention, shoreline change analysis has never been more relevant or challenging. Historic, centennial-scale shoreline change analysis relies on historic maps, where there is normally just a single consistent proxy indicator of shoreline position; the mean water level of ordinary tides on UK Ordnance Survey maps, for example. Occasionally, where there are specific coastal landforms that can be mapped, there might be a second proxy such as cliff-top position. Shoreline change rates can be determined by extracting these proxies from sequential map surveys, provided the survey dates (i.e. not the map publication dates) are known.
Shoreline change quantification for more recent, decadal-scale periods has been greatly enhanced by increased data availability. This is exemplified by analyses that use the widespread coverage available from aerial photographs (past three decades). Even more recently, on near-annual scales, Light Detection and Ranging (LiDAR) data are becoming the norm for capturing storm impacts and shoreline change, enabling volumetric assessments of change in addition to the more traditional linear approaches. LiDAR is complemented by ground-survey Real-Time Kinematic (RTK) instrumentation that can be timed to coincide with storms. As the frequency of dataset capture has increased, so has the spatial scale of coverage. Hence the latest shoreline change assessments are global in scale and use Landsat images to focus on hotspots of shoreline change (advance as well as retreat) over the past 30 years. Considering all scales together raises three central questions for shoreline change analysis, and these are addressed in this paper.
Firstly, what methodological approach is most suitable for delimiting shorelines and generating the underpinning digitised shorelines for shoreline change assessment?
Secondly, what lessons can be learnt from using an approach that combines both proxy-based (visually discernible signatures) and datum-based (related to a particular water level) shorelines that change differentially with respect to different process-drivers?
Thirdly, given the current state-of-the-art around data availability, what is the most appropriate scale to approach shoreline change assessments?
How to cite: Brooks, S., Pollard, J., and Spencer, T.: Contemporary challenges for Shoreline Change Analysis, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8153, https://doi.org/10.5194/egusphere-egu2020-8153, 2020.
EGU2020-8632 | Displays | ITS4.5/GI1.4
Assessment of maximum flow and emplacement temperatures reached by PDCs using charred wood fragments
Alessandra Pensa, Sveva Corrado, and Guido Giordano
Temperature evaluation of PDCs has recently been performed using optical analysis of charred wood (reflectance analysis, Ro%) embedded within the pyroclastic deposits.
The validity of this proxy for emplacement temperature assessment has been established in different case studies (Fogo, Laacher See, Merapi, Colima, Doña Juana, and Ercolano-Vesuvius volcanoes), proving comparable with the already well-known paleomagnetic analysis (pTRM).
Because it is not a retrograde process, carbonification records over time the maximum temperatures experienced by a wood fragment, tree trunk or piece of furniture. This peculiarity has great importance in terms of the timing of charring events, as the charred wood can record possible temperature fluctuations in the case of multiple-pulse events. This allows us to reconstruct the thermal and dynamic history of PDCs at different steps.
Reflectance analysis (Ro%) shows some samples with a homogeneous charring temperature (same Ro% values) from rim to core and others with different charring temperatures throughout the sample. The Ro% of the latter usually indicates a higher temperature on the edge of the fragment or tree trunk than in its inner part. This bimodal reflectance distribution can be attributed to multiple temperature exposures that occurred during diachronous events of flow and deposition. Therefore, within the same fragment or tree trunk we can extract PDC temperature information related not only to the equilibrium (emplacement) condition but, more importantly, to the dynamic (flow) regime.
This study constitutes a pioneering attempt at the indirect estimation of PDC temperatures, not only for volcanic hazard estimation but also in the archaeological field. In fact, the numerous remains of charred wooden artefacts found in the archaeological sites of Pompeii and Herculaneum and in the Meurin quarry (Eifel, Germany) allowed the reconstruction of temperature variations based on distance from the vent and on the presence of buildings that may have interacted with the depositional processes of the pyroclastic flows. This study opens a promising new frontier for evaluating the maximum temperature of PDCs, based on the degree of carbonization of the organic matter incorporated during volcanic events. Estimating the dynamic temperature of PDCs has important implications in terms of volcanic risk assessment.
How to cite: Pensa, A., Corrado, S., and Giordano, G.: Assessment of maximum flow and emplacement temperatures reached by PDCs using charred wood fragments, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8632, https://doi.org/10.5194/egusphere-egu2020-8632, 2020.
EGU2020-8735 | Displays | ITS4.5/GI1.4
Optimizing barrier placement for lava flow hazard and risk mitigation
Giuseppe Bilotta, Annalisa Cappello, Veronica Centorrino, Claudia Corradino, Gaetana Ganci, and Ciro Del Negro
Mitigating hazards when lava flows threaten infrastructure is one of the most challenging fields of volcanology, and has an immediate and practical impact on society. Lava flow hazard is determined by the probability of inundation, and essentially controlled by the topography of the area of interest. The most common interventions for lava flow hazard mitigation are therefore the construction of artificial barriers and ditches that can control the flow direction and advancement speed. Estimating the effect a barrier or ditch can have on lava flow paths is non-trivial, but numerical modelling provides a powerful tool: by simulating the eruptive scenario it can assess the effectiveness of the mitigation action. We present a numerical method for the design of optimal artificial barriers, in terms of location and geometric features, aimed at minimizing the impact of lava flows based on the spatial distribution of exposed elements. First, an exposure analysis collects information about elements at risk from different datasets: population per municipality, distribution of buildings, infrastructure, routes, gas and electricity networks, and land use; numerical simulations are used to compute the probability for these elements to be inundated by lava flows from a number of possible eruptive scenarios (hazard assessment) and to compute the associated economic loss and potential destruction of key facilities (risk assessment). We then generate several intervention scenarios, defined by the location, orientation and geometry (width, length, thickness and even shape) of multiple barriers, and compute the corresponding variation in economic loss. Optimality of the barrier placement is thus considered as a minimization problem for the economic loss, controlled by the barrier placement and constrained by the associated costs. We demonstrate the operation of this system by using a retrospective analysis of some recent effusive eruptions at Mount Etna, Sicily.
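The minimization framing can be illustrated with a generic sketch: among candidate barrier configurations whose construction cost fits a budget, pick the one with the lowest simulated expected loss. Both callables below are placeholders for the simulation-based hazard and risk assessments described above; the function and parameter names are assumptions.

```python
def best_barrier(candidates, expected_loss, cost, budget):
    """Choose the barrier configuration minimizing expected economic loss.

    `candidates` is any iterable of configurations; `expected_loss(cfg)`
    would be evaluated by lava-flow simulations over the eruptive
    scenarios, and `cost(cfg)` is the construction cost. Configurations
    exceeding `budget` are discarded; returns None if none is feasible.
    """
    feasible = [c for c in candidates if cost(c) <= budget]
    if not feasible:
        return None
    return min(feasible, key=expected_loss)
```

In practice the candidate set would be generated from the location, orientation and geometry parameters listed in the abstract, making this a constrained search over a parameterized design space.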
How to cite: Bilotta, G., Cappello, A., Centorrino, V., Corradino, C., Ganci, G., and Del Negro, C.: Optimizing barrier placement for lava flow hazard and risk mitigation, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8735, https://doi.org/10.5194/egusphere-egu2020-8735, 2020.
EGU2020-9120 | Displays | ITS4.5/GI1.4
The VEI 2 Christmas 2018 Etna Eruption: A small but intense eruptive event or the starting phase of a larger one?
Sonia Calvari, Giuseppe Bilotta, Alessandro Bonaccorso, Tommaso Caltabiano, Annalisa Cappello, Claudia Corradino, Ciro Del Negro, Gaetana Ganci, Marco Neri, Emilio Pecora, Giuseppe G. Salerno, and Letizia Spampinato
The Etna flank eruption that started on 24 December 2018 lasted a few days and involved the opening of an eruptive fissure, accompanied by a seismic swarm with shallow earthquakes and by large, widespread ground deformation, especially on the eastern flank of the volcano. Lava fountains and an ash plume from the uppermost eruptive fissure accompanied the opening stage, causing disruption of Catania international airport, and were followed by quiet lava effusion within the barren Valle del Bove depression until 27 December. This is the first flank eruption at Etna in the last decade, a period during which eruptive activity was confined to the summit craters and produced lava fountains and lava flows from the crater rims. In this paper we use ground-based and satellite remote sensing techniques to describe the sequence of events, quantify the erupted volumes of lava, gas and tephra, and assess the volcanic hazard.
How to cite: Calvari, S., Bilotta, G., Bonaccorso, A., Caltabiano, T., Cappello, A., Corradino, C., Del Negro, C., Ganci, G., Neri, M., Pecora, E., Salerno, G. G., and Spampinato, L.: The VEI 2 Christmas 2018 Etna Eruption: A small but intense eruptive event or the starting phase of a larger one?, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9120, https://doi.org/10.5194/egusphere-egu2020-9120, 2020.
EGU2020-9672 | Displays | ITS4.5/GI1.4
Using explanatory crop models to help decision support system in controlled environment agriculture (CEA)
Chiara Amitrano, Giovanni Battista Chirico, Youssef Rouphael, Stefania De Pascale, and Veronica De Micco
Lettuce (Lactuca sativa L.) is a popular leafy vegetable, widely grown and consumed throughout the world. Growing lettuce in a controlled environment is useful for increasing yield and obtaining year-round production. In Controlled Environment Agriculture (CEA), computer technology is an integral part of production, and sensors are needed to monitor environmental parameters and activate environmental control. With the advent of technology, proximal sensors and plant phenotyping (in terms of physiological measurements of plant status) can help farmers in crop management. However, these tools are often expensive or inaccessible to stakeholders. Applying them to small-scale cultivation trials could provide data for the implementation of mathematical models capable of predicting changes that may occur during cultivation. These models could then be applied at larger scales, such as extensive farm production, and be used to support cultivation management.
In this study, green and red cultivars of Lactuca sativa L. ‘Salanova’ were grown in a growth chamber under controlled environmental conditions (temperature, relative humidity, light intensity and quality) in two trials with different vapour pressure deficit (VPD): 1) VPD of 0.70 kPa (low VPD; nominal condition) and 2) VPD of 1.76 kPa (high VPD; off-nominal condition). Plants were irrigated to field capacity and weighed every day to record daily evapotranspiration (ET); infra-red measurements were carried out to record leaf temperature, and pictures were taken to monitor growth during cultivation. Furthermore, after 23 days, eco-physiological analyses (gas exchange and chlorophyll a measurements) were performed on fully developed leaves to assess the physiological behaviour of the plants in response to the different environmental conditions. Environmental data were used as inputs in an energy cascade model (MEC) to predict changes in daily plant growth, photosynthesis and evapotranspiration. The original model was implemented with a few variations: leaf temperature was used in place of air temperature for computing the stomatal conductance (gs), and the model parameters maxCUE and maxQY were differentiated for the nominal and off-nominal scenarios and for the green and red lettuce cultivars. After validation against experimental data, this model appears to be a promising tool for forecasting variations triggered by anomalies in environmental control. However, a next step will be to add parameters accounting for the intrinsic morpho-physiological variability of plants during leaf development.
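To make the model variations concrete, here is a toy daily-growth step loosely inspired by energy-cascade crop models, with leaf temperature and VPD driving a stomatal-conductance factor as in the study's modification. All functional forms and coefficients below are illustrative assumptions, not the calibrated MEC parameters.

```python
import math

# Toy MEC-style daily growth step. The temperature optimum, VPD response and
# parameter values are illustrative, not the study's calibrated values.

def stomatal_conductance(leaf_temp_c, vpd_kpa, g_max=0.4):
    """Toy gs [mol m^-2 s^-1]: peaks near a 24 degC leaf temperature and
    declines as vapour pressure deficit (VPD, in kPa) increases."""
    temp_factor = math.exp(-((leaf_temp_c - 24.0) / 10.0) ** 2)
    vpd_factor = 1.0 / (1.0 + vpd_kpa)
    return g_max * temp_factor * vpd_factor

def daily_growth(ppfd, max_cue, max_qy, gs, g_ref=0.4):
    """Toy daily biomass gain: incident light scaled by quantum yield
    (max_qy), carbon-use efficiency (max_cue) and stomatal limitation."""
    return ppfd * max_qy * max_cue * (gs / g_ref)

gs_nominal = stomatal_conductance(24.0, 0.70)       # low-VPD trial
gs_off_nominal = stomatal_conductance(26.0, 1.76)   # high-VPD trial
print(daily_growth(400, 0.55, 0.06, gs_nominal)
      > daily_growth(400, 0.55, 0.06, gs_off_nominal))  # → True
```

The point of using leaf rather than air temperature is that the two diverge under off-nominal VPD, so gs (and hence predicted growth and ET) responds to the condition the leaf actually experiences.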
How to cite: Amitrano, C., Chirico, G. B., Rouphael, Y., De Pascale, S., and De Micco, V.: Using explanatory crop models to help decision support system in controlled environment agriculture (CEA), EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9672, https://doi.org/10.5194/egusphere-egu2020-9672, 2020.
EGU2020-11456 | Displays | ITS4.5/GI1.4
Groundwater and soil CO2 efflux weekly monitoring network for the surveillance of Cumbre Vieja volcano, Canary Islands
Cecilia Amonte, Alana Mulliss, Elizabeth Sampson, Alba Martín-Lorenzo, Claudia Rodríguez-Pérez, Daniel Di Nardo, Gladys V. Melián, José M. Santana-dLeón, Pedro A. Hernández, and Nemesio M. Pérez
La Palma Island (708.32 km2) is located at the north-western end of the Canary Archipelago and is one of the youngest islands of the archipelago. In the last 123 ka, volcanic activity has taken place exclusively at Cumbre Vieja, the most active basaltic volcano in the Canaries, located in the southern part of the island. Since no visible geothermal manifestations occur at the surface of this volcano, over the last 20 years there has been considerable interest in the study of diffuse degassing as a powerful tool in the volcano monitoring programme. In this study we used two different geochemical approaches for volcano monitoring from October 2017 to November 2019. First, we developed a network of 21 closed static chambers to determine soil CO2 effluxes. Additionally, we monitored physical-chemical parameters (temperature, pH, electrical conductivity (EC)) and the chemical/isotopic composition of dissolved gases in the water of two galleries (Peña Horeb and Trasvase Oeste) and one water well (Las Salinas). Soil CO2 effluxes from the alkaline traps showed an average value of 7.4 g·m⁻²·d⁻¹ for the entire Cumbre Vieja volcano. The gas sampled in the headspace of the traps can be considered CO2-enriched air, with an average value of 1,942 ppmV of CO2. Regarding the CO2 isotopic composition (δ13C-CO2), most stations exhibited CO2 composed of different degrees of mixing between atmospheric and biogenic CO2, with slight contributions of deep-seated CO2 and an average value of -19.3‰. The physical-chemical parameters measured in the waters showed mean temperatures of 23.7°C, 19.6°C and 22.1°C, pH values of 7.40, 6.27 and 6.60, and EC values of 1,710 µS·cm⁻¹, 411 µS·cm⁻¹ and 41,100 µS·cm⁻¹ for Peña Horeb, Trasvase Oeste and Las Salinas, respectively. The δ13C-CO2 composition of the dissolved gas has mean values of -7.8‰, -10.2‰ and -3.8‰ vs. VPDB for Peña Horeb, Trasvase Oeste and Las Salinas, respectively.
The highest CO2 efflux values coincided with the stations showing the highest CO2 concentrations, located at the southern end of Cumbre Vieja, where the most recent volcanic eruption took place, and also on the northwest flank. This is in accordance with the results obtained for the Las Salinas well, located in the south of the island, which shows a high concentration of dissolved CO2 and δ13C-CO2 values with a strong deep-seated CO2 contribution. This study represents an interesting contribution to the detection of early warning signals of future unrest episodes at Cumbre Vieja.
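The deep-seated contribution inferred from δ13C-CO2 can be illustrated with a simple two-endmember mixing calculation. The endmember values below are illustrative placeholders (the study interprets mixing between atmospheric, biogenic and deep-seated CO2, and a rigorous treatment would also weight by CO2 concentration):

```python
# Two-endmember d13C-CO2 mixing sketch. Endmember values are illustrative;
# a shallow (biogenic-dominated) and a deep (magmatic) endmember are assumed,
# and linear mixing in delta is a simplification.

def deep_fraction(d13c_sample, d13c_shallow=-22.0, d13c_deep=-3.0):
    """Fraction of deep-seated CO2 in a sample under linear two-endmember
    mixing between the shallow and deep endmembers (per mil vs. VPDB)."""
    return (d13c_sample - d13c_shallow) / (d13c_deep - d13c_shallow)

# Las Salinas mean (-3.8 permil) plots close to the deep endmember,
# Trasvase Oeste (-10.2 permil) is intermediate:
print(round(deep_fraction(-3.8), 2), round(deep_fraction(-10.2), 2))  # → 0.96 0.62
```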
How to cite: Amonte, C., Mulliss, A., Sampson, E., Martín-Lorenzo, A., Rodríguez-Pérez, C., Di Nardo, D., Melián, G. V., Santana-dLeón, J. M., Hernández, P. A., and Pérez, N. M.: Groundwater and soil CO2 efflux weekly monitoring network for the surveillance of Cumbre Vieja volcano, Canary Islands, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11456, https://doi.org/10.5194/egusphere-egu2020-11456, 2020.
EGU2020-11929 | Displays | ITS4.5/GI1.4
Global data-base of co-seismic interferograms generated via unsupervised Sentinel-1 DInSAR processing
Fernando Monterroso, Manuela Bonano, Claudio De Luca, Vincenzo De Novellis, Riccardo Lanari, Michele Manunta, Mariarosaria Manzo, Giovanni Onorato, Emanuela Valerio, Ivana Zinno, and Francesco Casu
Differential Synthetic Aperture Radar Interferometry (DInSAR) is one of the key methods to investigate, with centimetre to millimetre accuracy, Earth surface displacements such as those occurring during natural and man-made hazards.
Nowadays, with the increasing availability of SAR data provided by the Sentinel-1 (S1) constellation of the Copernicus European Programme, the radar Earth Observation (EO) scenario is moving from historical analysis to operational functionalities. Indeed, the S1 mission, using the Terrain Observation by Progressive Scans (TOPS) technique, has been designed with the specific aim of natural hazard monitoring via SAR interferometry, guaranteeing very large coverage of the illuminated scene (250 km swath). These characteristics combine with the free and open access data policy, the global-scale acquisition plan and the high system reliability, providing a set of peculiarities that make S1 a game changer in the operational EO scenario.
By taking advantage of the S1 characteristics, an unsupervised, cloud-based tool for the automatic generation of co-seismic ground displacement maps has recently been proposed. The tool is triggered by significant (i.e. larger than a defined magnitude) seismic events reported in the online catalogues of the United States Geological Survey (USGS) and the National Institute of Geophysics and Volcanology of Italy (INGV). The system can generate not only the co-seismic displacement maps but also the pre- and post-seismic ones, up to 30 days after the monitored event.
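The magnitude-based triggering logic can be sketched as a filter over a USGS-style GeoJSON event feed. The payload here is shown inline with hypothetical event IDs; an operational system would fetch it from the USGS/INGV web services and launch the DInSAR processing chain for each triggering event.

```python
# Sketch of catalogue-driven triggering: select events above a magnitude
# threshold from a USGS-style GeoJSON feed. Threshold and events are
# hypothetical; a real system would poll the catalogue web services.

MIN_MAGNITUDE = 5.5  # illustrative trigger threshold

def triggering_events(feed, min_mag=MIN_MAGNITUDE):
    """Return (event id, magnitude) for events that should trigger processing."""
    return [
        (f["id"], f["properties"]["mag"])
        for f in feed["features"]
        if f["properties"]["mag"] is not None and f["properties"]["mag"] >= min_mag
    ]

feed = {"features": [
    {"id": "us1000abcd", "properties": {"mag": 6.4}},   # hypothetical events
    {"id": "us1000efgh", "properties": {"mag": 4.9}},
]}
print(triggering_events(feed))  # → [('us1000abcd', 6.4)]
```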
Although it was conceived to generate displacement maps for upcoming earthquakes, as an operational service for Civil Protection departments, the implemented tool has also been applied to the study of historical events imaged by S1 data. This allowed us to generate a global database of DInSAR-based co-seismic displacement maps.
Accordingly, the implementation of this database will be presented, with particular emphasis on the computing infrastructure solutions exploited (namely the AWS cloud computing environment), the algorithmic strategies used and the interferometric results achieved.
Moreover, the whole database of DInSAR products will be made available through the European Plate Observing System (EPOS) Research Infrastructure, making it freely and openly accessible to the European and international solid Earth community.
The implemented global database will be helpful for investigating the dynamics of surface deformation in seismic zones around the Earth. Indeed, it will contribute to the study of global tectonic earthquake activity through the integration of DInSAR information with other geophysical parameters.
This work has been partially supported by the 2019-2021 IREA-CNR and Italian Civil Protection Department agreement, the EPOS-IP and EPOS-SP projects of the European Union Horizon 2020 R&I programme (grant agreements 676564 and 871121) and the I-AMICA (PONa3_00363) project.
How to cite: Monterroso, F., Bonano, M., De Luca, C., De Novellis, V., Lanari, R., Manunta, M., Manzo, M., Onorato, G., Valerio, E., Zinno, I., and Casu, F.: Global data-base of co-seismic interferograms generated via unsupervised Sentinel-1 DInSAR processing, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11929, https://doi.org/10.5194/egusphere-egu2020-11929, 2020.
EGU2020-12259 | Displays | ITS4.5/GI1.4
Design of an optimal seismic network for monitoring geothermal fields
Leonarda Isabel Esquivel Mendiola, Marco Calò, and Anna Tramelli
The reliable monitoring and location of seismic activity at local and regional scales is a key factor for hazard assessment. The exploitation of a geothermal field can be affected by natural and induced seismicity; hence, optimal planning of a seismic network is of great interest for geothermal development.
Seismic monitoring depends on two main aspects: i) the sensitivity of the seismic network and ii) the effectiveness of the detection and location methods.
In this study, we focus on the first aspect, proposing an improvement of an algorithm for the optimization of seismic networks designed to monitor the seismic activity related to the injection test that will be performed in a geothermal well.
The algorithm is based on the method proposed by Tramelli et al. (2013), which seeks the optimal station positions by minimizing the volume of the location error ellipsoid for synthetic events using the D-criterion (Rabinowitz and Steinberg, 2000).
In this version of the program, we improve the algorithm to find an optimal seismic network by considering several types of prior information: 1) maps of seismic noise levels in different frequency bands, 2) three-dimensional seismic models and 3) the topographic gradient of the study region. This information is usually produced during the exploration stage of a geothermal site and is available prior to an injection test.
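The D-criterion underlying the method can be illustrated with a greedy station-selection sketch: maximize the determinant of the information matrix built from travel-time derivatives, which is equivalent to minimizing the volume of the location error ellipsoid. The Jacobian rows below are random placeholders; in practice they come from the velocity model, and rows would additionally be down-weighted by station noise levels.

```python
import numpy as np

# Greedy D-optimal station selection sketch. jacobian_rows holds, for each
# candidate site, the travel-time derivatives w.r.t. hypocenter parameters;
# here they are random placeholders rather than model-derived values.

def greedy_d_optimal(jacobian_rows, n_select, ridge=1e-6):
    """Iteratively add the station whose row maximizes det(G^T G)."""
    chosen = []
    for _ in range(n_select):
        best, best_det = None, -np.inf
        for i in range(len(jacobian_rows)):
            if i in chosen:
                continue
            g = jacobian_rows[chosen + [i]]
            info = g.T @ g + ridge * np.eye(g.shape[1])  # regularize early steps
            d = np.linalg.det(info)
            if d > best_det:
                best, best_det = i, d
        chosen.append(best)
    return chosen

rng = np.random.default_rng(0)
rows = rng.normal(size=(8, 3))   # 8 candidate sites, 3 location parameters
print(greedy_d_optimal(rows, n_select=4))
```

Greedy selection is only a heuristic for the full combinatorial problem, but it conveys why geometrically diverse stations (near-orthogonal Jacobian rows) shrink the error ellipsoid fastest.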
We applied the methodology to the Acoculco geothermal field (Mexico), where an injection test is planned. In this work, we compare the standard approach, which uses 1D seismic models, constant noise levels and no topographic effects, with the new one, showing how important it is to consider these parameters for a more suitable optimization of the seismic network.
This work is performed in the framework of the Mexican-European consortium GeMex (Cooperation in Geothermal energy research Europe-Mexico, PT5.2 N: 267084, funded by CONACyT-SENER: S0019, 2015-04) and of the joint agreement between UNAM and INGV on the development of seismological research on volcanic and geothermal fields (N: 44753-1023-22-IV-16/1).
How to cite: Esquivel Mendiola, L. I., Calò, M., and Tramelli, A.: Design of an optimal seismic network for monitoring geothermal fields, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12259, https://doi.org/10.5194/egusphere-egu2020-12259, 2020.
EGU2020-12879 | Displays | ITS4.5/GI1.4
PI-COSMOS: An Open Source Python based High Speed Coastal Video Monitoring System
Ramesh Madipally, Sheela Nair L, and Rui Taborda
In recent years, coastal video monitoring methods have become widely accepted tools for the continuous monitoring of complex coastal processes. In this paper, we present the progress made on a new Python-based coastal video monitoring system, PI-COSMOS (Portuguese Indian COaStal MOnitoring System), which is being developed and tested jointly on the Indian and Portuguese coasts. PI-COSMOS aims to provide open-source, high-speed video monitoring toolboxes for the coastal community that can be used anywhere in the world. PI-COSMOS is a camera-independent system comprising four modules, viz. PI-Calib for camera calibration, RectiPI for video imagery rectification, PI-ImageStacks for image product and pixel product generation, and PI-DB for efficient database management. The applicability of PI-COSMOS under different coastal environmental conditions has been tested using data collected from both the Indian and the Portuguese coasts. Results from one of the Indian stations, installed at Kozhikode beach, Kerala, India (11°15'14.12" N, 75°46'15.40" E), are presented here to demonstrate the capabilities of the newly developed system. The performance of PI-COSMOS was evaluated in a comparative study against existing video monitoring toolboxes, namely the UAV processing toolbox provided by the Coastal Imaging Research Network and RectifyExtreme provided by the University of Lisbon; PI-COSMOS was found to be more than five times faster than either. Its high processing speed, camera-independent nature and ease of operation make PI-COSMOS a simple and advanced open-source video monitoring system.
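At the core of a rectification module such as RectiPI is the mapping of image pixels to plan-view ground coordinates via a planar homography. The sketch below shows only that mapping step; the 3x3 matrix is a hypothetical stand-in for a calibration result that a real system would estimate from surveyed ground control points (e.g. during the PI-Calib stage).

```python
import numpy as np

# Plan-view rectification sketch: apply a planar homography to map an image
# pixel (u, v) to ground coordinates (x, y). H here is an illustrative
# placeholder, not an actual PI-COSMOS calibration.

def rectify_pixel(H, u, v):
    """Apply homography H to pixel (u, v); divide by the homogeneous scale."""
    p = H @ np.array([u, v, 1.0])
    return float(p[0] / p[2]), float(p[1] / p[2])

H = np.array([[0.5, 0.0, 10.0],   # hypothetical calibration result
              [0.0, 0.5, 20.0],
              [0.0, 0.0, 1.0]])
print(rectify_pixel(H, 100, 200))  # → (60.0, 120.0)
```

Speed in batch rectification comes largely from vectorizing this operation over whole pixel grids rather than looping pixel by pixel, which is one plausible reason a NumPy-based pipeline outperforms older toolboxes.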
How to cite: Madipally, R., Nair L, S., and Taborda, R.: PI-COSMOS: An Open Source Python based High Speed Coastal Video Monitoring System , EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12879, https://doi.org/10.5194/egusphere-egu2020-12879, 2020.
EGU2020-13218 | Displays | ITS4.5/GI1.4
Fast short-term lava flow hazard assessment with graph theoryVeronica Centorrino, Giuseppe Bilotta, Annalisa Cappello, Gaetana Ganci, Claudia Corradino, and Ciro Del Negro
We explore the use of graph theory to assess short-term hazard of lava flow inundation, with Mt Etna as a case study. In the preparation stage, we convert into a graph the long-term hazard map produced using about 30,000 possible eruptive scenarios calculated by simulating lava flow paths with the physics-based MAGFLOW model. Cells in the original DEM-based representation are merged into graph vertices if reached by the same scenarios, and for each pair of vertices, a directed edge is defined, with an associated lava conductance (probability of lava flowing from one vertex to the other) computed from the number of scenarios that reach both the start and end vertex. In the application stage, the graph representation can be used to extract short-term lava flow hazard maps in case of unrest. When a potential vent opening area is identified, e.g. from monitoring data, the corresponding vertices in the graph are activated, and the information about lava inundation probability is iteratively propagated to neighboring vertices through the edges, weighted according to the associated lava conductance. This allows quick identification of potentially inundated areas at minimal computational cost. A comparison with the deterministic approach of subsetting and recomputing the weights in the long-term hazard map is also presented to illustrate benefits and downsides of the graph-based approach.
How to cite: Centorrino, V., Bilotta, G., Cappello, A., Ganci, G., Corradino, C., and Del Negro, C.: Fast short-term lava flow hazard assessment with graph theory, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13218, https://doi.org/10.5194/egusphere-egu2020-13218, 2020.
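The iterative propagation described above can be sketched as follows. The graph layout, conductance values, and the independence-based update rule are illustrative assumptions, not the authors' actual implementation:

```python
# Illustrative sketch of probability propagation through a conductance-weighted
# directed graph (assumed update rule; not the authors' code).
from collections import defaultdict, deque

def propagate_hazard(edges, active_vents):
    """edges: (u, v) -> lava conductance (prob. lava at u also reaches v).
    active_vents: vertex -> prior probability of vent opening there."""
    out = defaultdict(list)
    for (u, v), c in edges.items():
        out[u].append((v, c))

    prob = defaultdict(float, active_vents)
    queue = deque(active_vents)
    while queue:
        u = queue.popleft()
        for v, c in out[u]:
            # combine as independent paths: lava reaches v via u with prob P(u)*c
            new = 1.0 - (1.0 - prob[v]) * (1.0 - prob[u] * c)
            if new - prob[v] > 1e-9:      # propagate only meaningful updates
                prob[v] = new
                queue.append(v)
    return dict(prob)

# toy graph: activated vent A feeds B, which feeds C downslope
print(propagate_hazard({("A", "B"): 0.8, ("B", "C"): 0.5}, {"A": 1.0}))
```

The point of the preparation stage is exactly that this traversal touches only vertices and edges, so the short-term map costs a graph walk rather than thousands of new flow simulations.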
EGU2020-13672 | Displays | ITS4.5/GI1.4
A public-private collaboration initiative for innovative Earth Observation (EO) technologies and methodologies for investigating climate change impacts by means of an inter-disciplinary approach: the OT4CLIMA projectNicola Pergola, Carmine Serio, Francesco Ripullone, Francesco Marchese, Giuseppe Naviglio, Pietro Tizzani, and Angelo Donvito and the OT4CLIMA Team
The OT4CLIMA project, funded by the Italian Ministry of Education, University and Research within the PON 2014-2020 Industrial Research program ("Aerospace" thematic domain), aims at developing advanced Earth Observation (EO) technologies and methodologies to improve our capability to understand the effects of Climate Change (CC) and to mitigate them at the regional and sub-regional scale. Both medium-to-long-term impacts (e.g. vegetation stress, drought) and extreme events with rapid dynamics (e.g. intense meteorological phenomena, fires) will be investigated, pursuing a twofold technological innovation involving both "products" and "processes": a) the design and implementation of advanced sensors to be mounted on multi-platform EO systems; b) the development of advanced methodologies for EO data analysis, interpretation, integration and fusion.
Activities will focus on two of the major natural processes strictly related to Climate Change, namely the carbon and water cycles, using an inter-disciplinary approach.
As an example, the project will make possible measurements, with unprecedented accuracy, of atmospheric (e.g. OCS, carbonyl sulphide) and surface (e.g. soil moisture) parameters that are crucial in determining the vegetation contribution to the CO2 balance, while suggesting solutions based on the analysis and integration of satellite, airborne and unmanned data, in order to significantly improve the capability of local communities to face short- and long-term CC-related effects.
OT4CLIMA benefits from strong scientific expertise (14 CNR institutes, ASI, INGV, CIRA and 3 universities), considerable research infrastructure and a wide industrial partnership (including big national players, i.e. the E-Geos and IDS companies, well-established Italian SME consortia, i.e. CREATEC, CORISTA and SIIT, and a spin-off company, Survey Lab) specifically focused on the technological innovation frontier.
This contribution summarizes the project's main objectives and presents some of the activities carried out so far.
How to cite: Pergola, N., Serio, C., Ripullone, F., Marchese, F., Naviglio, G., Tizzani, P., and Donvito, A. and the OT4CLIMA Team: A public-private collaboration initiative for innovative Earth Observation (EO) technologies and methodologies for investigating climate change impacts by means of an inter-disciplinary approach: the OT4CLIMA project, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13672, https://doi.org/10.5194/egusphere-egu2020-13672, 2020.
EGU2020-14473 | Displays | ITS4.5/GI1.4
Quantifying asset and visitor risk at Mt. Taranaki, New Zealand from multiple volcanic hazardsJonathan Procter, Stuart Mead, and Mark Bebbington
We present a probabilistic quantification of multiple volcanic hazards in an assessment of risk to visitors and assets in Egmont National Park, New Zealand. The probability of impact to proposed park infrastructure from volcanic activity (originating from Mt. Taranaki) is quantified using a combination of statistical and numerical techniques. While single (volcanic) hazard assessments typically follow a methodology where the hazard source (e.g. pyroclastic flow, ashfall, debris avalanche) is the focus and defines an area of impact, our multi-volcanic hazard assessment uses a location-centred methodology where critical locations are used to define the range of hazard sources that affect risk over park asset lifetimes. Key to this process is creating fast (i.e. linear/functional) mappings between hazard source parameters such as volume and impact parameters such as depth. These mappings can then be combined with stochastic models to find the probability of input parameters and the probability of eruptions generating these input parameters. For some hazards, such as ash fall, statistical models are available to map intensity to probability. For mass flow hazards, however, we used Gaussian process emulation to develop a computationally cheap surrogate of the numerical simulations that can be efficiently sampled for probabilistic hazard assessment, a suitable alternative when statistical models for the hazard are unavailable. Our study demonstrates the use of these techniques to integrate stochastic and deterministic models for probabilistic volcano multi-hazard assessment.
How to cite: Procter, J., Mead, S., and Bebbington, M.: Quantifying asset and visitor risk at Mt. Taranaki, New Zealand from multiple volcanic hazards, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-14473, https://doi.org/10.5194/egusphere-egu2020-14473, 2020.
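The emulation-then-sampling step described above can be illustrated with a minimal Gaussian process surrogate. The `simulator` stand-in, kernel settings, and training design below are invented for the example; in the study each training point would be an expensive numerical mass-flow simulation:

```python
# Minimal GP-emulator sketch (illustrative kernel, data, and "simulator").
import numpy as np

def rbf(a, b, length=0.5, var=1.0):
    """Squared-exponential kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def fit_gp(x_train, y_train, noise=1e-6):
    """Return the GP posterior mean as a cheap surrogate function."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    alpha = np.linalg.solve(K, y_train)
    return lambda x: rbf(x, x_train) @ alpha

# stand-in for an expensive flow simulation: depth as a function of volume
simulator = lambda volume: np.sqrt(volume)
x_train = np.linspace(0.1, 2.0, 8)            # a few "simulator runs"
emulator = fit_gp(x_train, simulator(x_train))

# Monte-Carlo hazard step: sample eruption volumes, query the surrogate
volumes = np.random.default_rng(0).uniform(0.1, 2.0, 10_000)
depths = emulator(volumes)
print(f"P(depth > 1 m) ~ {np.mean(depths > 1.0):.2f}")
```

The design choice is that the emulator is evaluated tens of thousands of times per location at negligible cost, which is what makes probabilistic assessment over many sources and locations tractable.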
EGU2020-17991 | Displays | ITS4.5/GI1.4
Self-potential (SP) soil monitoring tool by numbers and vectors characteristicVincenzo Di Fiore, Francesca Bianco, Giuseppe Cavuoto, Girolamo Milano, Nicola Pelosi, Michele Punzo, Daniela Tarallo, Pietro Tizzani, Michele Iavarone, Paolo Scotto di Vettimo, and Costantino Di Gregorio
This paper attempts to contribute to electric field monitoring in soil through self-potential (SP) surveying, utilizing matrix determinants and eigenvalues. SP is connected to the electrical conductivity of the soil, an indirect measurement that correlates very well with several physical and chemical properties. The purpose of the method is to map the electrical potential in order to reveal one or several polarization mechanisms at play in the ground. In some cases, self-potential signals are monitored with an electrode network, which provides the possibility to discriminate between various sources.
Our study presents synthetic and experimental cases that demonstrate a semi-quantitative method to estimate the variation of the electric field vector in the soil starting from the measurement of the SP. The experimental case refers to a site located in the Campania Region in southern Italy. SP measurements can be performed with dipole arrays oriented N-S, E-W and in the vertical direction. In this way, we can define the contribution of the electric field in both the time and spatial domains. If we denote by V(t) the potential difference between two electrodes, then for each time tn we have three values Vx(tn), Vy(tn) and Vz(tn) relative to the dipoles in the three directions. Assuming that the electric currents associated with these potentials are continuous, or in any case at very low frequency, we can with good approximation assume that the resulting electric field is conservative. Recalling that in the case of a conservative field the electric field vector can be expressed as the gradient of the scalar potential V, each field component follows directly from the corresponding dipole voltage. The SP data were acquired with an Arduino acquisition system with an internal voltmeter impedance of 10 MOhm and a resolution of 0.1 mV. In order to provide reliable SP measurements, the impedance of the voltmeter needs to be substantially higher than the impedance of the soil between the electrodes, because of the small bias current used to measure the voltage. The electric potential was measured between each electrode and a reference electrode connected to ground. The array consisted of 8 electrodes spaced 1 m apart and arranged in a cross, forming 6 dipoles. The cross array was oriented in the N-S and E-W directions.
Because the SP depends on the electrical conductivity of the soil, and therefore on the sources and the medium, any chemical-physical variation of the soil implies a variation of the SP. We calculated the determinant and the eigenvalues of the matrix whose columns consist of the components of the measured electric field; through these parameters it was possible to observe the variations of the electric field in the time domain.
How to cite: Di Fiore, V., Bianco, F., Cavuoto, G., Milano, G., Pelosi, N., Punzo, M., Tarallo, D., Tizzani, P., Iavarone, M., Scotto di Vettimo, P., and Di Gregorio, C.: Self-potential (SP) soil monitoring tool by numbers and vectors characteristic, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17991, https://doi.org/10.5194/egusphere-egu2020-17991, 2020.
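The determinant/eigenvalue bookkeeping described above can be sketched as follows. The dipole length, the synthetic voltage record, and the choice of building each matrix from three consecutive samples are assumptions made for illustration, not details taken from the study:

```python
# Sketch of SP field-matrix monitoring (window choice and data are illustrative).
import numpy as np

DIPOLE_LENGTH = 1.0  # m, electrode spacing (assumed)

def field_matrix(vx, vy, vz):
    """3x3 matrix whose columns are E = -grad(V) ~ -dV/L at three
    consecutive samples of the N-S, E-W and vertical dipoles."""
    return -np.array([vx, vy, vz]) / DIPOLE_LENGTH  # rows: Ex(t), Ey(t), Ez(t)

def monitor(vx, vy, vz, window=3):
    """Slide a window over the record; the determinant and eigenvalues
    track temporal changes of the measured field."""
    dets, eigs = [], []
    for i in range(len(vx) - window + 1):
        M = field_matrix(vx[i:i+window], vy[i:i+window], vz[i:i+window])
        dets.append(np.linalg.det(M))
        eigs.append(np.linalg.eigvals(M))
    return np.array(dets), np.array(eigs)

# synthetic dipole voltages in mV
rng = np.random.default_rng(1)
vx, vy, vz = rng.normal(0.0, 0.5, (3, 100))
dets, eigs = monitor(vx, vy, vz)
print(dets.shape, eigs.shape)
```

A chemical-physical change in the soil would shift these scalar summaries away from their baseline, which is what makes them usable as compact monitoring indicators.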
EGU2020-18497 | Displays | ITS4.5/GI1.4
IRON: a permanent dense nationwide radon network to approach the challenge of monitoring seismic regionsGaia Soldati and the IRON working team
The deployment of multi-station and multi-parameter networks is considered fundamental in view of the investigation of Earth’s internal processes from which volcanic and seismic activity originate. The different changes often observed before the occurrence of strong earthquakes or eruptions (anomalies in sub-soil gas emission, hydrothermal discharge, chemical composition of groundwaters, Earth’s electromagnetic field) highlight the key role of fluids in the generation of these natural phenomena. Since fluids transfer messages from the underground to the surface about how natural systems work, geochemistry can actively interact in a multidisciplinary context for investigating natural processes. While observational seismology has witnessed tremendous advances in the last twenty years, thanks to the development of very dense networks of stations measuring ground displacement, deformation and acceleration, the system of geochemical observations did not follow the same growth. The creation, ten years ago, of the Italian Radon mOnitoring Network (IRON) was motivated by the need for a permanent and dense network of stations aimed at making radon time series analysis a complement to traditional seismological tools. In fact, its radioactive nature makes radon a powerful tracer for fluid movements in the crust. The further step was the integration of IRON into a nationwide multi-parameter monitoring network, consisting so far of 10 homogeneous sites including velocimeters, accelerometers, GPS sensors, and instruments measuring the Earth’s electromagnetic field. The potential of IRON as a tool to study the relationship between radon variability and the preparation process of earthquakes is discussed by means of two practical applications: to the 2016 Amatrice-Visso-Norcia seismic sequence and to the shorter sequence following the Ml 4.4 earthquake of 7 November 2019 in the Frusinate region.
How to cite: Soldati, G. and the IRON working team: IRON: a permanent dense nationwide radon network to approach the challenge of monitoring seismic regions, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18497, https://doi.org/10.5194/egusphere-egu2020-18497, 2020.
EGU2020-18692 | Displays | ITS4.5/GI1.4
Electrical resistivity variation connected to volcanic earthquake in the Campi Flegrei, ItalyDaniela Tarallo, Giuseppe Cavuoto, Vincenzo Di Fiore, Nicola Pelosi, Michele Punzo, Maurizio Milano, Massimo Contiero, Michele Iavarone, and Marina Iorio
In this study we present a 2D Electrical Resistivity Tomography (ERT) survey acquired at the Agnano site before (Dec 5th, 2019) and after (Dec 12th, 2019) the earthquake events that occurred in the Pisciarelli-Solfatara area. This earthquake swarm consisted of a sequence of 34 earthquakes with magnitude (Md) -1.1≤Md≤2.8 at depths between 0.9 and 2.3 km. In particular, the earthquake of Dec 6th, 2019 at 00:17 UTC with Md = 2.8 (depth 2 km) was the largest event recorded since the bradyseismic crisis began in 2005.
The ERT survey allows us to identify the main structural boundaries (and their associated fluid circulations) defining the shallow architecture of the Agnano volcano. The hydrothermal system is identified by very low values of the electrical resistivity (<20 Ω m). Its downward extension is clearly limited by the lava and pyroclastic fragments, which are relatively resistive (>100 Ω m). The resistivity values increased after the main shock. This increase may have been caused by a change in the state of stress and a decrease in pore pressure (subsequent depressurization). Prior to the earthquake, an increase in pressurized fluids was observed, which reduced the resistivity values. This observation suggests that the temporal variation of the resistivity values is related to the variation of the pore fluid pressure in the source area of the swarm, facilitated by the earthquake and the subsequent fluid diffusion. The combination of these qualitative results with structural analysis leads to a synthetic model of magmatic and hydrothermal fluid circulation inside the Agnano area, which may be useful for the assessment of potential hazards associated with a renewal of fluid pressurization and a possibly associated partial flank failure.
How to cite: Tarallo, D., Cavuoto, G., Di Fiore, V., Pelosi, N., Punzo, M., Milano, M., Contiero, M., Iavarone, M., and Iorio, M.: Electrical resistivity variation connected to volcanic earthquake in the Campi Flegrei, Italy, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18692, https://doi.org/10.5194/egusphere-egu2020-18692, 2020.
EGU2020-18727 | Displays | ITS4.5/GI1.4
Understanding changes in environmental time series with time-frequency causality analysisMaha Shadaydeh, Yanira Guanche García, Miguel Mahecha, and Joachim Denzler
Understanding causal effect relationships between the different variables in dynamical systems is an important and challenging problem in different areas of research such as attribution of climate change, brain neural connectivity analysis, psychology, among many others. These relationships are guided by the process generating them. Hence, detecting changes or new patterns in the causal effect relationships can be used not only for the detection but also for the diagnosis and attribution of changes in the underlying process.
Environmental time series most often contain multiple periodic components, e.g. daily and seasonal cycles, induced by the meteorological forcing variables. These can significantly mask the underlying endogenous causality structure when using time-domain analysis and therefore result in several spurious links, while filtering the periodic components as a preprocessing step might degrade causal inference. This motivates the use of time-frequency processing techniques such as wavelet or short-time Fourier transforms, with which the causality structure can be examined at each frequency component and on multiple time scales.
In this study, we use a parametric time-frequency representation of vector autoregressive Granger causality for causal inference. We first show that causal inference using time-frequency domain analysis outperforms time-domain analysis when dealing with time series that contain periodic components, trends, or noise. The proposed approach allows for the estimation of the causal effect interaction between each pair of variables in the system on multiple time scales and hence for excluding links that result from periodic components.
Second, we investigate whether anomalous events can be identified based on the observed changes in causal relationships. We consider two representative examples in environmental systems: land-atmosphere ecosystem and marine climate. Through these two examples, we show that an anomalous event can indeed be identified as the event where the causal intensities differ according to a distance measure from the average causal intensities. Two different methods are used for testing the statistical significance of the causal-effect intensity at each frequency component.
Once the anomalous event is detected, the driver of the event can be identified based on the analysis of changes in the obtained causal effect relationships during the time duration of the event and consequently provide an explanation of the detected anomalous event. Current research efforts are directed towards the extension of this work by using nonlinear state-space models, both statistical and deep learning-based ones.
How to cite: Shadaydeh, M., Guanche García, Y., Mahecha, M., and Denzler, J.: Understanding changes in environmental time series with time-frequency causality analysis, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18727, https://doi.org/10.5194/egusphere-egu2020-18727, 2020.
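The time-domain Granger test underlying the approach above can be sketched as follows; the authors' parametric frequency-domain representation builds on the same fitted autoregressive coefficients. The lag order, synthetic data, and least-squares fitting routine are illustrative assumptions, not the study's implementation:

```python
# Bivariate Granger-causality sketch (time domain, illustrative data and lags).
import numpy as np

def ar_fit_residual_var(y, regressors, p):
    """Least-squares fit of y on p lags of each regressor; residual variance."""
    n = len(y)
    X = np.column_stack([r[p - k - 1:n - k - 1]
                         for r in regressors for k in range(p)])
    yt = y[p:]
    beta, *_ = np.linalg.lstsq(X, yt, rcond=None)
    return np.var(yt - X @ beta)

def granger(x, y, p=2):
    """Granger causality y -> x: log ratio of restricted to full residual variance."""
    restricted = ar_fit_residual_var(x, [x], p)      # x predicted from its own past
    full = ar_fit_residual_var(x, [x, y], p)         # ... plus the past of y
    return np.log(restricted / full)

# synthetic pair: y drives x with a one-step delay
rng = np.random.default_rng(0)
y = rng.normal(size=2000)
x = np.empty_like(y)
x[0] = 0.0
x[1:] = 0.8 * y[:-1] + 0.2 * rng.normal(size=1999)
print(granger(x, y) > granger(y, x))   # causal direction y -> x recovered
```

In the frequency-domain (Geweke-style) extension, the fitted coefficients yield a spectral transfer function so that this same causal intensity can be decomposed per frequency, which is what allows periodic components to be excluded rather than filtered out beforehand.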
EGU2020-18739 | Displays | ITS4.5/GI1.4
Monitoring system at etna volcano during seismo-volcanic crisis of december 2018 based on multiorbits SBAS-DInSAR analysis
Susi Pepe, Manuela Bonano, Raffaele Castaldo, Francesco Casu, Claudio De luca, Vincenzo De novellis, Riccardo Lanari, Michele Manunta, Mariarosaria Manzo, Giuseppe Solaro, Pietro Tizzani, Emanuela Valerio, Giovanni Zeni, and Ivana Zinno
An intense volcano-tectonic crisis affected part of Etna volcano from 24 to 27 December 2018; the event was analyzed and monitored using the DInSAR technique, taking advantage of Sentinel-1 constellation and COSMO-SkyMed measurements to retrieve the observed deformation pattern.
In particular, we used Sentinel-1 datasets acquired along ascending and descending orbits with a 6-day revisit time in the Interferometric Wide Swath (IWS) acquisition mode, referred to as Terrain Observation with Progressive Scans (TOPS). The volcano-tectonic crisis generated a dyke intrusion with intense eruptive activity at the summit craters, accompanied by explosions and lava flows that also modified part of the volcanic edifice; on 26 December 2018 an earthquake of M=4.9 occurred on the lower part of the southeastern flank. We generated long-term mean deformation velocity maps and the corresponding time series for the pre-event (April 2015-24 December 2018) and post-event (28 December 2018-March 2020) periods. For the crisis itself we exploited Sentinel-1 interferograms from two ascending orbits and one descending orbit; these underwent a multilook operation (5 and 20 pixels along the azimuth and range directions, respectively), leading to a ground pixel size of about 70 by 70 meters. Furthermore, we combined the ascending and descending orbits to obtain the East-West and vertical components of the volcano edifice displacements.
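The multilook operation reduces an interferogram to larger, less noisy ground pixels by block-averaging. A minimal sketch (the `multilook` helper and its default 5 x 20 window are illustrative, not the authors' processing chain):

```python
import numpy as np

def multilook(ifg, az=5, rg=20):
    """Block-average an interferogram over az x rg pixel windows,
    trading spatial resolution for reduced phase noise."""
    rows = (ifg.shape[0] // az) * az   # crop to a whole number of blocks
    cols = (ifg.shape[1] // rg) * rg
    blocks = ifg[:rows, :cols].reshape(rows // az, az, cols // rg, rg)
    return blocks.mean(axis=(1, 3))
```

With ~14 m azimuth and ~3.5 m range pixels, a 5 x 20 window yields the roughly 70 x 70 m ground pixels quoted above.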
The main results show deformation at Etna's summit craters and on the eastern flank of the edifice astride the eruptive event, with a vertical deformation of about 50 cm and a jump of about 40 cm in the horizontal component. This evidence is confirmed by the East-West interferogram, whose maximum values exceed 30 cm towards the west and 40 cm towards the east at the summit of the volcano. In the area of the 26 December main shock, maximum eastward and westward displacements of 12-14 cm and 15-17 cm are observed, respectively. In general, after 27 December the vertical and horizontal velocity maps show a progressive attenuation of movement over time. On the eastern flank, horizontal (eastward) displacements of up to 10 cm accumulated in the months following the seismo-volcanic events of December 2018. In the region south-west of the Fiandaca structure, affected by the Mw 4.9 earthquake, the movement is almost stationary in the post-event period, with a small westward motion of 1.5 cm in the last month. Finally, the deformation of the area around Lachea island currently shows a positive stationary trend (towards the east). On the western side, post-event displacements increased compared to the period preceding the event, although with generally smaller magnitudes than on the eastern side; a progressive attenuation of the westward movement with time is highlighted, reaching about 7 cm in the last year.
This analysis allowed the Italian Civil Protection to follow the evolution of the volcano-tectonic crisis over the last two years, and the scientific community to take relevant decisions about the level of emergency for the local population.
How to cite: Pepe, S., Bonano, M., Castaldo, R., Casu, F., De luca, C., De novellis, V., Lanari, R., Manunta, M., Manzo, M., Solaro, G., Tizzani, P., Valerio, E., Zeni, G., and Zinno, I.: Monitoring system at etna volcano during seismo-volcanic crisis of december 2018 based on multiorbits SBAS-DInSAR analysis, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18739, https://doi.org/10.5194/egusphere-egu2020-18739, 2020.
EGU2020-19751 | Displays | ITS4.5/GI1.4
A-posteriori Analyses of Pattern Recognition Results
Horst Langer, Susanna Falsaperla, and Conny Hammer
Data-driven approaches applied to large and complex data sets are intriguing; however, the results must be reviewed with a critical attitude. For example, a diagnostic tool may provide hints of a serious disease, or of anomalous conditions potentially indicating an impending natural risk. The demand for a high score of identified anomalies – true positives – comes together with the request for a low percentage of false positives. Indeed, a high rate of false positives can ruin the diagnostics. Receiver Operating Characteristic (ROC) curves allow us to find a reasonable compromise between the need for diagnostic accuracy and robustness with respect to false alerts.
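The ROC trade-off can be made concrete by computing the true- and false-positive rates of a detector at a given threshold; sweeping the threshold traces out the curve. An illustrative sketch (the scores and labels are invented):

```python
import numpy as np

def roc_point(scores, labels, threshold):
    """True- and false-positive rates of a detector at one threshold;
    evaluating many thresholds yields the full ROC curve."""
    scores = np.asarray(scores)
    labels = np.asarray(labels).astype(bool)
    pred = scores >= threshold                     # detections at this threshold
    tpr = (pred & labels).sum() / labels.sum()     # fraction of anomalies caught
    fpr = (pred & ~labels).sum() / (~labels).sum() # fraction of false alerts
    return fpr, tpr
```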
In multiclass problems, success is commonly measured as the score for which the calculated and target classifications of the patterns best match. A high score does not automatically mean that a method is truly effective. Its value becomes questionable when a random guess leads to a high score as well. The so-called kappa statistic is an elegant way to assess the quality of a classification scheme. We present some case studies demonstrating how such a-posteriori analysis helps corroborate the results.
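Cohen's kappa compares the observed accuracy with the agreement expected by chance, so a classifier that merely exploits a dominant class scores poorly. A minimal sketch (the confusion matrix is an invented example):

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa: agreement between calculated and target classes
    beyond what a chance assignment would already achieve."""
    c = np.asarray(confusion, dtype=float)
    n = c.sum()
    observed = np.trace(c) / n                               # raw accuracy
    expected = (c.sum(axis=0) * c.sum(axis=1)).sum() / n**2  # chance agreement
    return (observed - expected) / (1.0 - expected)

# A degenerate classifier that always predicts the dominant class:
# 90% accuracy, yet kappa = 0, because random guessing at the class
# frequencies would agree with the targets just as often.
print(cohens_kappa([[90, 0], [10, 0]]))  # → 0.0
```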
Sometimes an approach does not lead to the desired success. In these cases, a sound a-posteriori analysis of the reasons for the failure often provides interesting insights into the problem. Such problems may reside in an inappropriate definition of the targets, inadequate features, etc. Often they can be fixed simply by adjusting some choices; otherwise, a change of strategy may be necessary to achieve a more satisfying result. In the applications presented here, we highlight the pitfalls arising in particular from ill-defined targets and unsuitable feature selections.
The validation of unsupervised learning is still a matter of debate. Some formal criteria (e.g., the Davies-Bouldin index, the silhouette index, and others) are available for centroid-based clustering, where a unique metric valid for all clusters can be defined. Difficulties arise when metrics are defined individually for each single cluster (for instance, Gaussian model clusters, adaptive criteria) as well as in schemes where centroids are essentially meaningless, as is the case in density-based clustering. In all these cases, users are better off asking themselves whether a clustering is meaningful for the problem in physical terms. In our presentation we discuss the problem of choosing a suitable number of clusters in cases in which formal criteria are not applicable. We demonstrate how the identification of groups of patterns helps identify elements with a clear physical meaning, even when strict rules for assessing the clustering are not available.
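For the centroid-based case, a formal criterion such as the silhouette index can be computed directly from pairwise distances. A compact sketch (illustrative, not taken from the presentation; it assumes every cluster holds at least two points):

```python
import numpy as np

def silhouette(points, labels):
    """Mean silhouette score: (b - a) / max(a, b) per point, where a is
    the mean distance to the point's own cluster and b the mean distance
    to the nearest other cluster; values near 1 indicate compact,
    well-separated clusters."""
    points = np.asarray(points, dtype=float)
    labels = np.asarray(labels)
    dists = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    scores = []
    for i in range(len(points)):
        same = labels == labels[i]
        same[i] = False                       # exclude the point itself
        a = dists[i, same].mean()
        b = min(dists[i, labels == c].mean()
                for c in set(labels.tolist()) if c != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))
```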
How to cite: Langer, H., Falsaperla, S., and Hammer, C.: A-posteriori Analyses of Pattern Recognition Results, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19751, https://doi.org/10.5194/egusphere-egu2020-19751, 2020.
EGU2020-19861 | Displays | ITS4.5/GI1.4
Scenario-based multi-risk assessment on exposed buildings to volcanic cascading hazards
Michael Langbein, Juan Camilo Gomez-Zapata, Theresa Frimberger, Nils Brinckmann, Roberto Torres-Corredor, Daniel Andrade, Camilo Zapata-Tapia, Massimiliano Pittore, and Elisabeth Schoepfer
In order to assess the building portfolio composition for a particular natural hazard risk assessment application, it is necessary to classify the built environment into schemas containing building classes. The building classes should also address the attributes that may control their vulnerability towards the different hazards associated with their failure mechanisms, which, along with their respective fragility functions, are representative of a particular study area. In the case of volcanic risk, previous efforts to develop volcano-related fragility functions have mostly addressed European, Atlantic-island and South Asian building types (SEDIMER, MIA VITA, VOLDIES, EXPLORIS, SAFELAND projects). However, in other parts of the globe, particular construction practices, materials, and even occupancies may describe very diverse building types with different degrees of vulnerability, which may or may not be compatible with the existing schemas and fragility functions (Spence et al. 2005, Zuccaro et al. 2013, Mavrouli et al. 2013, Jenkins et al. 2014, Torres-Corredor et al. 2017).
As highlighted by Zuccaro et al. 2018, in volcanically active areas the built environment is exposed not to a single hazard but to several compound or cascading hazards (e.g. tephra fall, pyroclastic flows, lahars), with different time intervals between them; a dynamic vulnerability with cumulative damage to the physical assets should therefore be the baseline upon which a volcanic multi-risk framework is described. In this context, single-hazard but multi-state fragility functions have very recently been used to set up damage descriptions independently of the reference building schema. We propose to generalize this novel approach and further extend it in the volcanic risk assessment context. To do so, the very first step was to generate a multi-hazard building taxonomy containing a set of exhaustive, mutually exclusive building attributes. Upon that framework, a probabilistic mapping across single-hazard building schemas and damage states has been achieved.
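The probabilistic mapping between schemas can be sketched as matrix algebra: a compatibility matrix converts exposure from one schema to another, and per-class damage-state probabilities then give the expected damage distribution. All numbers below are invented for illustration:

```python
import numpy as np

# Exposure given in schema A (3 building classes) is mapped into
# schema B (2 classes) through a compatibility matrix P(B-class | A-class).
compat = np.array([[0.9, 0.1],
                   [0.2, 0.8],
                   [0.5, 0.5]])
exposure_a = np.array([100.0, 50.0, 20.0])   # buildings per A-class
exposure_b = exposure_a @ compat             # expected buildings per B-class

# Per-class damage-state probabilities P(damage state | B-class) for one
# hazard scenario then yield the expected damage distribution.
fragility = np.array([[0.7, 0.2, 0.1],
                      [0.3, 0.4, 0.3]])
expected_damage = exposure_b @ fragility     # → totals 110/60 and 95/46/29
```

For cascading hazards, the damage-state vector of one event becomes the starting condition for the fragility model of the next, which is how cumulative damage enters the chain.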
This methodological approach has been tested within the RIESGOS project over a selected study area of the Latin American Andes region. In this region, cities close to active volcanoes have experienced unstructured growth, which translates into a significantly vulnerable population living in non-engineered buildings that are highly exposed to volcanic hazards. The Cotopaxi region in Ecuador has been chosen in order to explore the damage contributions of ash fall and lahars under several scenarios in terms of the volcanic explosivity index (VEI). Local lahar simulations have been obtained at different resolutions. Moreover, probabilistic ash-fall maps have recently been obtained after exhaustive ash-fall and wind-direction measurements. Lahar flow velocity and ash-fall load pressure were respectively used as intensity measures. Furthermore, the local and foreign building schemas that define the building exposure models have been constrained through ancillary data, cadastral information, and remote individual building inspections, and then associated with multi-state fragility functions. These ingredients have been integrated into this novel scenario-based multi-risk volcanic assessment methodology.
How to cite: Langbein, M., Gomez- Zapata, J. C., Frimberger, T., Brinckmann, N., Torres- Corredor, R., Andrade, D., Zapata- Tapia, C., Pittore, M., and Schoepfer, E.: Scenario- based multi- risk assessment on exposed buildings to volcanic cascading hazards, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19861, https://doi.org/10.5194/egusphere-egu2020-19861, 2020.
EGU2020-20926 | Displays | ITS4.5/GI1.4
Crop Classification based Multi-Temporal Sentinel2 Data in the ShiYang River Basin
Zhiwei Yi
EGU2020-20931 | Displays | ITS4.5/GI1.4
Uncertainty quantification during seismic oceanography inversion with start Temperature-Salinity models of different lateral resolutions
Wuxin Xiao, Katy Sheen, Qunshu Tang, Richard Hobbs, Jamie Shutler, and Jo Browse
Seismic oceanography (SO) has been widely used for the inversion of physical oceanographic properties owing to its high lateral resolution (up to 10 m) compared with conventional oceanographic measurement methods. Normally, the inversion process requires seismic data and in-situ hydrographic data, the latter acquired by deploying XBTs/XCTDs. Recently, thanks to its ability to provide quantifiable uncertainties on the inverted parameters, a Markov chain Monte Carlo (MCMC) algorithm has been used for temperature and salinity inversion from SO data. Based on the MCMC inversion method, this study investigates the effect of the lateral density of XBT deployments on the resulting uncertainties of the inverted temperature and salinity. We analysed seismic data acquired in the Gulf of Cadiz (SW Iberia) in 2007 in the framework of the Geophysical Oceanography project. A nonlinear temperature-salinity relation is modelled using a genetic algorithm from CTD casts collected in the research area. Combining the temperature data from XBTs with the T-S relation, smoothed temperature and salinity prior distributions are derived. The posterior distributions of temperature and salinity are then estimated using the prior information and the field reflectivity data. In this study, the priors are varied by controlling the number of XBTs used, after which the corresponding uncertainties of the inverted temperature and salinity are calculated. The result quantifies the impact of prior models with different XBT deployment densities on the uncertainties of the inverted results. We propose that acquiring a reasonable temperature starting model should be the primary consideration when deciding the XBT deployment strategy along a seismic oceanography survey.
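The core of such an MCMC inversion is a sampler that explores the posterior and yields uncertainty estimates as a by-product of the chain. A minimal random-walk Metropolis sketch (illustrative only; the study inverts full temperature and salinity fields, not a scalar):

```python
import numpy as np

def metropolis(logpost, x0, step, n, seed=0):
    """Random-walk Metropolis: the minimal MCMC machinery behind
    posterior uncertainty estimates.  Returns the sampled chain, whose
    spread quantifies the uncertainty of the inverted parameter."""
    rng = np.random.default_rng(seed)
    x, lp = x0, logpost(x0)
    chain = np.empty(n)
    for i in range(n):
        prop = x + step * rng.normal()            # symmetric proposal
        lp_prop = logpost(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject
            x, lp = prop, lp_prop
        chain[i] = x
    return chain
```

Tightening the prior (e.g. with denser XBT coverage) narrows `logpost` and, through the same mechanism, narrows the posterior spread of the chain.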
How to cite: Xiao, W., Sheen, K., Tang, Q., Hobbs, R., Shutler, J., and Browse, J.: Uncertainty quantification during seismic oceanography inversion with start Temperature-Salinity models of different lateral resolutions, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20931, https://doi.org/10.5194/egusphere-egu2020-20931, 2020.
ITS4.6/NH6.7 – Data Science and Machine Learning for Natural Hazards and Seismology
EGU2020-4146 | Displays | ITS4.6/NH6.7
Application of Deep Learning to Detect Ground Deformation in InSAR Data
Pui Anantrasirichai, Juliet Biggs, Fabien Albino, and David Bull
Satellite interferometric synthetic aperture radar (InSAR) can be used to measure surface deformation for a variety of applications. Recent satellite missions, such as Sentinel-1, produce a large amount of data, meaning that visual inspection is impractical. Here we use deep learning, which has proved successful at object detection, to overcome this problem. Initially we present the use of convolutional neural networks (CNNs) for detecting rapid deformation events, which we test on a global dataset of over 30,000 wrapped interferograms at 900 volcanoes. We compare two potential training datasets: data augmentation applied to archive examples, and synthetic models. Both are able to detect true positives, but the data augmentation approach has a false positive rate of 0.205% and the synthetic approach a false positive rate of 0.036%. We then present an enhanced technique for measuring slow, sustained deformation over a range of scales, from volcanic unrest to urban sources of deformation such as coalfields. By rewrapping cumulative time series, the detection performance is improved when the deformation rate is slow, as more fringes are generated without altering the signal-to-noise ratio. We adapt the method to use persistent scatterer InSAR data, which is sparse in nature, by using spatial interpolation methods such as modified matrix completion. Finally, future perspectives for machine learning applications on InSAR data will be discussed.
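Rewrapping a cumulative phase time series is essentially a one-liner: scaling the cumulative phase before wrapping generates more fringes for slow deformation. A sketch (the `rewrap` helper and its `factor` argument are illustrative, not the authors' code):

```python
import numpy as np

def rewrap(cum_phase, factor=1):
    """Wrap a cumulative (unwrapped) phase series back into (-pi, pi];
    factor > 1 scales the phase first, producing more fringes for slow
    deformation without altering the signal-to-noise ratio."""
    return np.angle(np.exp(1j * factor * np.asarray(cum_phase)))
```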
How to cite: Anantrasirichai, P., Biggs, J., Albino, F., and Bull, D.: Application of Deep Learning to Detect Ground Deformation in InSAR Data, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4146, https://doi.org/10.5194/egusphere-egu2020-4146, 2020.
EGU2020-755 | Displays | ITS4.6/NH6.7
Classification of volcanic and tectonic earthquakes in Kamchatka (Russia) with different machine learning techniquesNatalia Galina, Nikolai Shapiro, Leonard Seydoux, and Dmitry Droznin
Kamchatka is an active subduction zone that exhibits intense seismic and volcanic activity. As a consequence, tectonic and volcanic earthquakes are often recorded nearly simultaneously at the same station. In this work, we consider seismograms recorded between December 2018 and April 2019. During this period, an M=7.3 earthquake, followed by an aftershock sequence, occurred nearly simultaneously with a strong eruption of Shiveluch volcano. As a result, stations of the Kamchatka seismic monitoring network recorded up to several hundred earthquakes per day. In total, we detected almost 7000 events of different origin using a simple automatic detection algorithm based on signal envelope amplitudes. Different features were then extracted for each detection. We started from simple signal parameters (amplitude, duration, peak frequency, etc.), moved to unsmoothed and smoothed spectra, and finally used a multi-dimensional signal decomposition (scattering coefficients). For event classification, both unsupervised (K-means, agglomerative clustering) and supervised (Support Vector Classification, Random Forest) classical machine learning techniques were applied to all types of extracted features. The obtained results are quite stable and do not vary significantly with the choice of features or method. The machine learning approaches thus allow us to clearly separate tectonic subduction-zone earthquakes from those associated with the Shiveluch volcano eruptions based on data from a single station.
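The feature-extraction and clustering pipeline above can be sketched in miniature. Everything here is illustrative: synthetic low-frequency "volcanic" and high-frequency "tectonic" signals stand in for the real detections, the sampling rate and feature choices are assumptions, and a plain K-means replaces the authors' full method suite.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 100.0                      # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)

def features(sig):
    # Simple per-detection features: amplitude, energy, peak frequency
    spec = np.abs(np.fft.rfft(sig))
    peak_freq = np.fft.rfftfreq(sig.size, 1 / fs)[np.argmax(spec)]
    return np.array([np.max(np.abs(sig)), np.sum(sig ** 2), peak_freq])

volcanic = [np.sin(2*np.pi*2.0*t) * np.exp(-t/3) + 0.05*rng.standard_normal(t.size)
            for _ in range(20)]
tectonic = [np.sin(2*np.pi*15.0*t) * np.exp(-t/3) + 0.05*rng.standard_normal(t.size)
            for _ in range(20)]
X = np.array([features(s) for s in volcanic + tectonic])
X = (X - X.mean(axis=0)) / X.std(axis=0)      # standardise each feature

# Plain two-cluster K-means (Lloyd's algorithm), seeded with one sample
# from each end of the feature matrix:
centroids = X[[0, -1]].copy()
for _ in range(20):
    labels = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
    centroids = np.array([X[labels == k].mean(axis=0) for k in (0, 1)])

# The two populations separate cleanly, driven by the peak-frequency feature.
```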
How to cite: Galina, N., Shapiro, N., Seydoux, L., and Droznin, D.: Classification of volcanic and tectonic earthquakes in Kamchatka (Russia) with different machine learning techniques, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-755, https://doi.org/10.5194/egusphere-egu2020-755, 2020.
EGU2020-3696 | Displays | ITS4.6/NH6.7
A LSTM Neural Network for On-site Earthquake Early WarningChia Yu Wang, Ting Chung Huang, and Yih Min Wu
On-site Earthquake Early Warning (EEW) systems estimate possibly destructive S-waves based on initial P-waves and issue warnings before large shaking arrives. On-site EEW plays a crucial role in filling the “blind zone” of regional EEW systems near the epicenter, which often suffers the most disastrous ground shaking. Previous studies show that peak P-wave displacement amplitude (Pd) may provide a possible indicator of destructive earthquakes. However, the attempt to use a single indicator with fixed thresholds suffers from inevitable misfits, since the diversity in travel paths and site effects for different stations introduces complex nonlinearities. To overcome this problem, we present a deep learning approach using Long Short-Term Memory (LSTM) neural networks. By utilizing the properties of multi-layered LSTMs, we are able to train a highly nonlinear neural network that takes the initial waveform as input and gives an alert probability as output at every time step. The network is then tested on several major earthquake events, yielding a missed-alarm rate of less than 0.03 percent and a false-alarm rate of less than 15 percent. Our model shows promising outcomes in reducing both missed alarms and false alarms while also providing improved warning times for hazard mitigation procedures.
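The per-time-step alert probability described above can be illustrated with a minimal single-layer LSTM forward pass. The weights here are random (untrained), so only the output shape and range are meaningful; the hidden size and scalar input are assumptions, and this is not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(1)
H = 8                               # hidden size (illustrative)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Gate weights stacked as (input, forget, cell, output) for a scalar input
W = 0.1 * rng.standard_normal((4 * H, 1 + H))
b = np.zeros(4 * H)
w_out = 0.1 * rng.standard_normal(H)   # hidden state -> alert logit

def alert_probabilities(waveform):
    h, c, probs = np.zeros(H), np.zeros(H), []
    for x in waveform:
        z = W @ np.concatenate(([x], h)) + b
        i, f, g, o = z[:H], z[H:2*H], z[2*H:3*H], z[3*H:]
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)   # cell-state update
        h = sigmoid(o) * np.tanh(c)                    # hidden-state update
        probs.append(sigmoid(w_out @ h))               # alert probability now
    return np.array(probs)

p = alert_probabilities(np.sin(np.linspace(0.0, 10.0, 200)))
# One alert probability per incoming sample, each strictly inside (0, 1).
```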
How to cite: Wang, C. Y., Huang, T. C., and Wu, Y. M.: A LSTM Neural Network for On-site Earthquake Early Warning, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3696, https://doi.org/10.5194/egusphere-egu2020-3696, 2020.
EGU2020-20009 | Displays | ITS4.6/NH6.7
Towards assessing the link between slow slip and seismicity with a Deep Learning approachGiuseppe Costantino, Mauro Dalla Mura, David Marsan, Sophie Giffard-Roisin, Mathilde Radiguet, and Anne Socquet
The deployment of increasingly dense geophysical networks in many geologically active regions of the Earth has made it possible to reveal deformation signals that were not detectable before. Examples of these newly discovered signals are those associated with low-frequency earthquakes, which can be linked with the slow (aseismic) slip of faults. Aseismic fault slip is a crucial phenomenon as it might play a key role in the precursory phase before large earthquakes (in particular in subduction zones), during which the seismicity rate grows, as does the ground deformation. Geodetic measurements, e.g. from the Global Positioning System (GPS), can track surface deformation transients likely induced by an episode of slow slip. However, very little is known about the mechanisms underlying this precursory phase, in particular regarding how slow slip and seismicity relate.
The analysis done in this work focuses on recordings acquired by the Japan Meteorological Agency in the Boso area, Japan. In the Boso peninsula, interactions between seismicity and slow slip events can be observed over different time spans: regular slow slip events occur every 4 to 5 years, lasting about 10 days, and are associated with a burst of seismicity (Hirose et al. 2012, 2014, Gardonio et al. 2018), whereas an accelerated seismicity rate has been observed over decades that is likely associated with an increasing shear stress rate (i.e., tectonic loading) on the subduction interface (Ozawa et al. 2014, Reverso et al. 2016, Marsan et al. 2017).
This work aims to explore the potential of Deep Learning for better characterizing the interplay between seismicity and ground surface deformation. The analysis is based on a data-driven approach for building a model that assesses whether a link between seismicity and surface deformation exists and characterizes the nature of this link. This has potentially strong implications: since (small) earthquakes are the prime observable, a better understanding of the seismicity-rate response to potentially small slow slip (so far undetected by GPS) could help monitor those small slow slip events. The statistical problem is expressed as a regression between features extracted from the seismic data and the GPS displacements recorded at one or more stations.
The proposed method, based on a Long Short-Term Memory (LSTM) neural network, is designed so that it is possible to estimate which features are most relevant in the estimation process. From a geophysical point of view, this can provide interesting insights for validating the results, assessing the robustness of the algorithms, and understanding the underlying process. This approach is a novelty in this field, since it opens original perspectives for the joint analysis of seismic and aseismic phenomena compared with traditional methods based on more classical geophysical data exploration.
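The feature-relevance idea above can be sketched with permutation importance, a generic stand-in for estimating which inputs matter: fit a regression from (hypothetical) seismicity features to displacement, then measure how much shuffling each feature degrades the fit. All data, feature labels, and the linear model are invented for illustration; this is not the authors' LSTM.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
# Columns: seismicity rate, mean magnitude, depth spread (illustrative labels)
X = rng.standard_normal((n, 3))
# Displacement depends strongly on column 0, weakly on 1, not at all on 2:
y = 2.0 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * rng.standard_normal(n)

coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # ordinary least squares fit

def mse(Xm):
    return np.mean((Xm @ coef - y) ** 2)

base = mse(X)
importance = []
for j in range(3):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])       # break feature j's link to y
    importance.append(mse(Xp) - base)
# The dominant feature shows by far the largest loss increase when permuted.
```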
How to cite: Costantino, G., Dalla Mura, M., Marsan, D., Giffard-Roisin, S., Radiguet, M., and Socquet, A.: Towards assessing the link between slow slip and seismicity with a Deep Learning approach, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20009, https://doi.org/10.5194/egusphere-egu2020-20009, 2020.
EGU2020-1484 | Displays | ITS4.6/NH6.7
TraML: separation of seismically-induced ground-motion signals with Autoencoder architectureArtemii Novoselov, Gerrit Hein, Goetz Bokelmann, and Florian Fuchs
Any time series can be represented as a sum of sine waves with the help of the Fourier transform. But such a transformation does not answer whether the signal comes from one source or several; nor does it allow separation of such sources. In this work, we present a technique from the machine learning domain, called autoencoders, that utilizes the ability of a neural network to generate signals from a latent space, which in turn allows us to identify signals from an arbitrary number of sources and to generate them as separate waveforms without any loss. We took ground motion records of passing trains and trams in the vicinity of the University of Vienna and trained the network to produce “clean” individual signals from “mixed” waveforms. This work proves the concept and sets the direction for further research on earthquake-induced source separation. It also benefits seismic interferometry, since the “noise” used for such research can be separated from the signal, thus reducing manual processing (cutting and clipping of signals) of seismic records.
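A crude stand-in for the separation described above: where the trained autoencoder decodes one clean waveform per source from its latent space, a fixed frequency mask plays that role here, splitting a mixture of a slow "tram" rumble and a faster "train" signature. Purely illustrative — the sources, frequencies, and the mask are assumptions, not the authors' network.

```python
import numpy as np

fs = 100.0
t = np.arange(0, 10, 1 / fs)
tram = np.sin(2 * np.pi * 3.0 * t)     # low-frequency source (assumed)
train = np.sin(2 * np.pi * 20.0 * t)   # higher-frequency source (assumed)
mixed = tram + train                   # what the seismometer records

spec = np.fft.rfft(mixed)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
low_mask = freqs < 10.0                # the boundary a trained network would learn

tram_hat = np.fft.irfft(np.where(low_mask, spec, 0.0), n=t.size)
train_hat = np.fft.irfft(np.where(low_mask, 0.0, spec), n=t.size)
# Because the two sources occupy disjoint frequency bands, each recovered
# waveform matches its source to machine precision in this toy setting.
```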
How to cite: Novoselov, A., Hein, G., Bokelmann, G., and Fuchs, F.: TraML: separation of seismically-induced ground-motion signals with Autoencoder architecture, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1484, https://doi.org/10.5194/egusphere-egu2020-1484, 2020.
EGU2020-4294 | Displays | ITS4.6/NH6.7
Application of Improved U-Net and ResU-Net Based Semantic Segmentation Method for Digitization of Analog SeismogramsJiahua Zhao, Miaki Ishii, Hiromi Ishii, and Thomas Lee
Analog seismograms contain rich and valuable information spanning nearly a century. However, these analog records are difficult to analyze quantitatively using modern techniques that require digital time series. At the same time, because these seismograms are deteriorating with age and need substantial storage space, their future has become uncertain. Conversion of the analog seismograms to digital time series will allow more conventional access and storage of the data as well as making them available for exciting scientific discovery. The digitization software, DigitSeis, reads a scanned image of a seismogram and generates digitized and timed traces, but the initial step of recognizing trace and time-mark segments, as well as other features such as hand-written notes, within the image poses certain challenges. Armed with manually processed analyses of image classification, we aim to automate this process using machine learning algorithms. Semantic segmentation methods have made breakthroughs in many fields. To solve the problem of accurate classification of scanned images of analog seismograms, we develop and test an improved deep convolutional neural network based on U-Net, Improved U-Net, and a deeper segmentation network that adds residual blocks, ResU-Net. The two segmentation objects are the traces and the time marks in the scanned images, and the goal is to train a binary classification model for each type of segmentation object, i.e., there are two models, one for trace objects and another for time-mark objects, for each of the neural networks. The networks are trained on 300 images of digitized analog seismograms from the Harvard-Adam Dziewoński Observatory from 1939. Application of the algorithms to a test data set yields a pixel accuracy (PA) for the Improved U-Net of 95% for traces and nearly 100% for time marks, with Intersection over Union (IoU) of 79% and 75% for traces and time marks, respectively.
The PA values for ResU-Net are 97% and nearly 100% for traces and time marks, with IoU of 83% and 74%. These experiments show that Improved U-Net is more effective for semantic segmentation of time marks, while ResU-Net is more suitable for traces. In general, both network models work well in separating and identifying objects, and provide a significant step toward automating the digitization of analog seismograms.
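The two scores quoted above, pixel accuracy (PA) and Intersection over Union (IoU), can be written out for a binary segmentation mask. The tiny arrays are made up for illustration; they are not the seismogram segmentations themselves.

```python
import numpy as np

def pixel_accuracy(pred, truth):
    return np.mean(pred == truth)          # fraction of all pixels correct

def iou(pred, truth):
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union                   # positive-class overlap only

truth = np.array([[1, 1, 0, 0],
                  [1, 1, 0, 0]])
pred  = np.array([[1, 1, 1, 0],
                  [1, 0, 0, 0]])

# PA counts every pixel (6 of 8 correct -> 0.75); IoU counts only the positive
# class (3 shared of 5 marked -> 0.6), which is why it is the stricter score.
```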
How to cite: Zhao, J., Ishii, M., Ishii, H., and Lee, T.: Application of Improved U-Net and ResU-Net Based Semantic Segmentation Method for Digitization of Analog Seismograms, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4294, https://doi.org/10.5194/egusphere-egu2020-4294, 2020.
EGU2020-5107 | Displays | ITS4.6/NH6.7
End-to-end PGA estimation for earthquake early warning using transformer networksJannes Münchmeyer, Dino Bindi, Ulf Leser, and Frederik Tilmann
The key task of earthquake early warning is to provide timely and accurate estimates of the ground shaking at target sites. Current approaches use either source- or propagation-based methods. Source-based methods calculate fast estimates of the earthquake source parameters and apply ground motion prediction equations to estimate shaking. They suffer from saturation effects for large events, simplified assumptions, and the need for a well-known hypocentral location, which usually requires arrivals at multiple stations. Propagation-based methods estimate levels of shaking from the shaking at neighboring stations and therefore have short warning times and possibly large blind zones. Both methods use only specific features of the waveform. In contrast, we present a multi-station neural network method to estimate horizontal peak ground acceleration (PGA) anywhere in the target region directly from raw accelerometer waveforms in real time.
The three main components of our model are a convolutional neural network (CNN) for extracting features from the single-station three-component accelerograms, a transformer network for combining features from multiple stations and transferring them to the target sites, and a mixture density network to generate probabilistic PGA estimates. By using a transformer network, our model is able to handle a varying set and number of stations as well as target sites. We train our model end-to-end using recorded waveforms and PGAs. We use data augmentation to enable the model to provide estimates at targets without waveform recordings. Starting with the arrival of a P wave at any station of the network, our model issues real-time predictions at each new sample. The predictions are Gaussian mixtures, giving estimates of both the expected value and the uncertainty. The model can be used to predict PGA at specific target sites, as well as to generate ground motion maps.
We analyze the model on two strong-motion data sets from Japan and Italy in terms of standard deviation and lead times. Through the probabilistic predictions we are able to give lead times for different levels of uncertainty and ground shaking. This makes it possible to control the ratio of missed detections to false alerts. Preliminary analyses suggest that for levels between 1%g and 10%g our model achieves multi-second lead times even for the closest stations at a false-positive rate below 25%. For an example event at 50 km depth, lead times at the closest stations with epicentral distances below 20 km are 6 s and 7.5 s. This suggests that our model is able to effectively use the difference between P and S travel times and accurately assess the future level of ground shaking from the first parts of the P wave. It additionally makes effective use of the information contained in the absence of signal at other stations.
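The probabilistic output described above — a Gaussian mixture over (log) PGA, with an alert triggered when the probability of exceeding a shaking level is high enough — can be sketched as follows. The mixture parameters below are invented for illustration, not model output.

```python
import numpy as np
from math import erf, sqrt

def mixture_exceedance(threshold, weights, means, stds):
    """P(X > threshold) for a one-dimensional Gaussian mixture."""
    prob = 0.0
    for w, m, s in zip(weights, means, stds):
        cdf = 0.5 * (1.0 + erf((threshold - m) / (s * sqrt(2.0))))
        prob += w * (1.0 - cdf)
    return prob

# Hypothetical two-component prediction for log10(PGA in %g), alert at 1%g:
p_exceed = mixture_exceedance(np.log10(1.0),
                              weights=[0.7, 0.3],
                              means=[np.log10(2.0), np.log10(0.5)],
                              stds=[0.2, 0.2])
# One component sits above the alert level and one below, so the mixture
# exceedance probability falls between the two component probabilities.
```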
How to cite: Münchmeyer, J., Bindi, D., Leser, U., and Tilmann, F.: End-to-end PGA estimation for earthquake early warning using transformer networks, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5107, https://doi.org/10.5194/egusphere-egu2020-5107, 2020.
EGU2020-5376 | Displays | ITS4.6/NH6.7
Classification and Separation of Diffraction Energy on Pre-Migration Seismic Data using Deep LearningBrydon Lowney, Ivan Lokmer, Gareth Shane O'Brien, and Christopher Bean
Diffractions are a useful aspect of the seismic wavefield and are often underutilised. By separating the diffractions from the rest of the wavefield, they can be used for various applications such as velocity analysis, structural imaging, and wavefront tomography. However, separating the diffractions is a challenging task due to the comparatively low amplitudes of diffractions as well as the overlap between reflection and diffraction energy. Whilst there are existing analytical methods for separation, these act to remove reflections, leaving a volume which contains diffractions and noise. On top of this, analytical separation techniques can be computationally costly and require manual parameterisation. To alleviate these issues, a deep neural network has been trained to automatically identify and separate diffractions from reflections and noise on pre-migration data.
Here, a Generative Adversarial Network (GAN) has been trained for the automated separation. This is a type of deep neural network architecture containing two neural networks which compete against one another. One network acts as a generator, creating new data which appears visually similar to the real data, while the second acts as a discriminator, trying to identify whether the given data is real or fake. As the generator improves, so too does the discriminator, giving a deeper understanding of the data. To avoid overfitting to a specific dataset, as well as to improve the cross-data applicability of the network, data from several seismic datasets from geologically distinct locations have been used in training. When a network trained on a single dataset is compared to one trained on several datasets, providing additional data improves the separation on both the original and the new datasets.
The automatic separation technique is then compared with a conventional analytical separation technique: plane-wave destruction (PWD). The computational cost of the GAN separation is far lower than that of PWD, performing a separation on a 3-D dataset in minutes rather than hours. Although in some complex areas the GAN separation is of higher quality than the PWD separation, as it does not rely on the dip, there are also areas where PWD outperforms the GAN separation. The GAN may be enhanced by adding more training data as well as by improving the initial separation used to create the training data, which is based around PWD and thus is imperfect and can introduce bias into the network. One possibility is training the GAN entirely on synthetic data, which allows for a perfect separation as the diffraction points are known; however, the synthetic data must be of sufficient volume for training and of sufficient quality for real-data applicability.
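The adversarial setup described above can be reduced to its two loss terms. Here d_real and d_fake are discriminator outputs in (0, 1); the numbers below are invented to show the training dynamic and bear no relation to the seismic GAN itself.

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    # The discriminator wants d_real -> 1 (real data) and d_fake -> 0 (generated)
    return -(np.log(d_real) + np.log(1.0 - d_fake))

def generator_loss(d_fake):
    # The generator wants the discriminator fooled: d_fake -> 1
    # (non-saturating form of the generator objective)
    return -np.log(d_fake)

early = (discriminator_loss(0.9, 0.1), generator_loss(0.1))  # discriminator winning
late  = (discriminator_loss(0.6, 0.5), generator_loss(0.5))  # generator catching up
# As the generator improves, the discriminator's loss rises and the generator's
# loss falls: the competition that sharpens both networks.
```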
How to cite: Lowney, B., Lokmer, I., O'Brien, G. S., and Bean, C.: Classification and Separation of Diffraction Energy on Pre-Migration Seismic Data using Deep Learning, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5376, https://doi.org/10.5194/egusphere-egu2020-5376, 2020.
EGU2020-5686 | Displays | ITS4.6/NH6.7
Extending near fault earthquakes catalogs using convolutional neural network and single-station waveforms
Josipa Majstorović and Piero Poli
Machine learning (ML) algorithms have already found application in standard seismological procedures, such as earthquake detection and localization, phase picking, and earthquake early warning. They are progressively becoming the methods of choice, since they can rapidly scan voluminous data and detect earthquakes even when buried in highly noisy time series.
We here make use of ML algorithms to obtain more complete near fault seismic catalogs and thus better understand the long-term (decades) evolution of seismicity before the occurrence of large earthquakes. We focus on data recorded before the devastating L’Aquila earthquake (6 April 2009 01:32 UTC, Mw6.3) right beneath the city of L’Aquila in the Abruzzo region (Central Italy). Before this event only sparse stations were available, reducing the magnitude of completeness of standard catalogs.
We adapted existing convolutional neural networks (CNN) for earthquake detection, localisation, and characterization using single-station waveforms. The CNN is applied to 29 years of data (1990 to 2019) recorded at the AQU station, located near the city of L’Aquila (Italy). The pre-existing catalog maintained by the Istituto Nazionale di Geofisica e Vulcanologia is used to define labels and to train and test the CNN. We are here interested in classifying the continuous three-component waveforms into four categories: noise/earthquake, distance (location), magnitude, and depth, where each category consists of several nodes. Existing seismic catalogs are used to label earthquakes, while the noise events are randomly selected between the catalog events, evenly represented by daytime and night-time periods.
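A single-station classifier of this kind can be sketched as a shared feature extractor feeding one output head per category. The minimal NumPy forward pass below is only illustrative: the kernel, window length, and per-category node counts are invented, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

def conv1d(x, k):
    """'Valid' cross-correlation of a 1-D signal with a kernel."""
    n = len(x) - len(k) + 1
    return np.array([np.dot(x[i:i + len(k)], k) for i in range(n)])

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

# A three-component waveform window (length and values are synthetic).
wave = rng.normal(size=(3, 128))

# Shared feature extractor: one convolution + ReLU + global average pooling
# per component, in place of the real network's stacked layers.
kernel = rng.normal(size=8)
feats = np.array([np.maximum(conv1d(c, kernel), 0.0).mean() for c in wave])

# One softmax head per category, each with several nodes as in the abstract
# (the node counts here are purely illustrative).
head_nodes = {"event": 2, "distance": 4, "magnitude": 5, "depth": 3}
probs = {name: softmax(rng.normal(size=(n, 3)) @ feats)
         for name, n in head_nodes.items()}
```

Each head outputs a probability distribution over its nodes, so one window yields a joint detection/location/magnitude/depth classification in a single pass.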
We prefer a CNN over other methods since we can use seismograms directly with very minor pre-processing (e.g. filtering) and do not need any prior knowledge of the region.
How to cite: Majstorović, J. and Poli, P.: Extending near fault earthquakes catalogs using convolutional neural network and single-station waveforms, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5686, https://doi.org/10.5194/egusphere-egu2020-5686, 2020.
EGU2020-10937 | Displays | ITS4.6/NH6.7
Machine Learning based Multivariate Seismic Site Classification System for South Korea
Han-Saem Kim, Chang-Guk Sun, Hyung-Ik Cho, and Moon-Gyo Lee
Earthquake-induced land deformation and structural failure are more severe over soft soils than over firm soils and rocks owing to the seismic site effect and liquefaction. The site-specific seismic site effect, related to the amplification of ground motion, has spatial uncertainty depending on the local subsurface, surface geological, and topographic conditions. When the 2017 Pohang earthquake (M 5.4), South Korea’s second-strongest earthquake in decades, occurred, severe damage influenced by variable site effect indicators was observed, concentrated in basin and basin-edge regions of unconsolidated Quaternary sediments. Site characterization that accounts for empirical correlations between geotechnical site response parameters and surface proxies is therefore essential. Furthermore, with so many variables and tenuously related correlations, machine learning classification models can prove more precise than parametric methods. In this study, a multivariate seismic site classification system was established using machine learning techniques based on a geospatial big data platform.
Supervised machine learning classification techniques, specifically random forest, support vector machine (SVM), and artificial neural network (ANN) algorithms, were adopted. Supervised machine learning algorithms analyze a set of labeled training data, consisting of input data and desired output values, and produce an inferred function which can be used for predictions on new input data. To optimize the classification criteria while considering geotechnical uncertainty and local site effects, training datasets transformed by principal component analysis (PCA) were verified with k-fold cross-validation. The optimal training algorithm was then selected using loss estimators based on the confusion matrix: the receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC).
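The PCA, k-fold cross-validation, and AUC steps of this workflow can be illustrated on synthetic data. Everything below is assumed for the sketch: the feature scales, the binary label, and a nearest-class-mean classifier standing in for the RF/SVM/ANN models of the study:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins for site features (e.g. layer thickness, slope, Vs,soil,
# TG); scales differ so that PCA retains the informative directions.
n, d = 200, 6
X = rng.normal(size=(n, d)) * np.array([3.0, 2.0, 1.5, 1.0, 1.0, 1.0])
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)  # binary "site class"

def pca(X, k):
    """Project onto the top-k principal components (via SVD)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def auc(scores, labels):
    """Mann-Whitney AUC: P(random positive outscores random negative)."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    return float((pos[:, None] > neg[None, :]).mean())

Z = pca(X, 3)

# 5-fold cross-validation of a nearest-class-mean classifier on the PC scores.
aucs = []
for fold in np.array_split(rng.permutation(n), 5):
    test = np.zeros(n, bool)
    test[fold] = True
    mu0 = Z[~test & (y == 0)].mean(axis=0)
    mu1 = Z[~test & (y == 1)].mean(axis=0)
    # Score: closer to the class-1 mean than to the class-0 mean -> higher.
    s = (np.linalg.norm(Z[test] - mu0, axis=1)
         - np.linalg.norm(Z[test] - mu1, axis=1))
    aucs.append(auc(s, y[test]))
```

The per-fold AUCs quantify how well the classifier separates the classes on held-out data, which is the basis on which the study selects its best-performing algorithm.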
For the southeastern region of South Korea, boring log information (strata, standard penetration tests, etc.), a geological map (1:50k scale), a digital terrain model (5 m × 5 m resolution), and a soil map (1:250k scale) were collected and assembled into geospatial big data. As a preliminary step, to build spatially coincident datasets of geotechnical response parameters and surface proxies, mesh-type geospatial information was constructed using advanced geostatistical interpolation and simulation methods.
Site classification systems use seismic response parameters related to the geotechnical characteristics of the study area as the classification criteria. The current site classification systems in South Korea and the United States recommend Vs30, the time-averaged shear wave velocity (Vs) of the upper 30 m of ground. This criterion uses only the dynamic characteristics of the site, without considering the geometric distribution of its layers. Thus, the geospatial information for the input layer included the geo-layer thickness, surface proxies (elevation, slope, geological category, soil category), the average Vs of the soil layer (Vs,soil), and the site period (TG). The Vs30-based site class was defined as the categorical label. Finally, the site class can be predicted from proxies alone using the optimized classification techniques.
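As a concrete illustration of the Vs30 criterion, the sketch below computes the time-averaged shear-wave velocity of the top 30 m for a hypothetical layered profile (the layer values are invented, not taken from the study):

```python
# Vs30 is the time-averaged shear-wave velocity over the top 30 m:
#   Vs30 = 30 / sum_i (h_i / Vs_i)
# for layer thicknesses h_i (m) and velocities Vs_i (m/s) down to 30 m depth.
def vs30(thicknesses, velocities):
    travel_time = 0.0
    depth = 0.0
    for h, v in zip(thicknesses, velocities):
        use = min(h, 30.0 - depth)   # clip the deepest layer at 30 m
        if use <= 0.0:
            break
        travel_time += use / v
        depth += use
    return 30.0 / travel_time

# Hypothetical profile: 5 m at 150 m/s, 10 m at 300 m/s, then 600 m/s below.
v = vs30([5.0, 10.0, 40.0], [150.0, 300.0, 600.0])
```

Because Vs30 averages travel time rather than velocity, slow shallow layers dominate the result; the profile above falls in the 180–360 m/s band despite its fast lower half.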
How to cite: Kim, H.-S., Sun, C.-G., Cho, H.-I., and Lee, M.-G.: Machine Learning based Multivariate Seismic Site Classification System for South Korea, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10937, https://doi.org/10.5194/egusphere-egu2020-10937, 2020.
EGU2020-12721 | Displays | ITS4.6/NH6.7
Data Driven Prediction of Seismic Ground Response under Low Level Excitation
Jaewon Yoo and Jaehun Ahn
It is an important task to model and predict seismic ground response; the results of ground response analysis are, in turn, used to assess liquefaction and the integrity of underground and overlying structures. There has been numerous research and development on the modelling of seismic ground response, but there are often quite large differences between prediction and measurement. In this study, we attempt to train on input and output ground excitation data and make predictions based on them. To initiate this work, a deep learning network was trained on low level excitation data; the results showed a reasonable match with actual measurements.
ACKNOWLEDGEMENT : The authors would like to thank the Ministry of Land, Infrastructure, and Transport of Korean government for the grant from Technology Advancement Research Program (grant no. 20CTAP-C152100-02) and Basic Science Research Program (grant no. 2017R1D1A3B03034563) through the National Research Foundation of Korea (NRF) funded by the Ministry of Education.
How to cite: Yoo, J. and Ahn, J.: Data Driven Prediction of Seismic Ground Response under Low Level Excitation, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12721, https://doi.org/10.5194/egusphere-egu2020-12721, 2020.
EGU2020-18818 | Displays | ITS4.6/NH6.7
Deep Learning P and S wave phase picking of Ocean Bottom Seismometer (OBS) data
Luis Fernandez-Prieto, Antonio Villaseñor, and Roberto Cabieces
Ocean Bottom Seismometers (OBS) are the primary instruments used in the study of marine seismicity. Due to the characteristics of their emplacement on the sea bottom, these instruments have a much lower signal-to-noise ratio than land seismometers. Therefore, difficulties arise in the analysis of the data, especially when using automatic methods.
During recent years the use of machine learning methods in seismic signal analysis has increased significantly. We have developed a neural network algorithm that picks seismic body-wave phases, correctly identifying P and S waves with a precision higher than 98%. This network was trained using data from the Southern California Seismic Network and has been applied satisfactorily to data from Large-N experiments in different regions of Europe and Asia.
One of the remarkable characteristics of the network is its ability to identify noise, both in the case of seismic signals with a low signal-to-noise ratio and in the case of large-amplitude non-seismic signals, such as human-induced noise. This feature makes the network an optimal candidate for studying data recorded using OBS.
We have modified this neural network in order to analyze OBS data from different deployments. Combined with the use of an associator, we have successfully located events with a very low signal-to-noise ratio, achieving a precision comparable or superior to that of a human operator.
How to cite: Fernandez-Prieto, L., Villaseñor, A., and Cabieces, R.: Deep Learning P and S wave phase picking of Ocean Bottom Seismometer (OBS) data, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18818, https://doi.org/10.5194/egusphere-egu2020-18818, 2020.
EGU2020-13442 | Displays | ITS4.6/NH6.7
Unsupervised classification of 30 years of near-fault seismological data: What can we learn about fault physics?
Piero Poli and Josipa Majstorovic
The exponential growth of geophysical (in particular seismological) data in recent years has made it hard to quantitatively label the daily, continuous stream of records (e.g. to systematically separate earthquakes from noise). On the other hand, these data are likely to contain an enormous amount of information about the physical processes occurring inside our planet, including new and original signals that can shed light on new physics of crustal rocks.
Of particular interest are data recorded near major faults, where one hopes to detect and discover new signals possibly associated with the precursory phase of significant and hazardous earthquakes.
With the above ideas in mind, we perform an unsupervised classification of 30 years of seismological data recorded at ~10 km from the L’Aquila fault (Italy), which hosted a magnitude 6 event in 2009 and still poses a significant hazard for the region.
We base our classification on daily spectra of three-component data and associated spectral features. We then use a self-organizing map (SOM) to perform a crude clustering of the 30 years of data. The data reduction offered by the SOM permits rapid visualization of this large dataset (~11k spectra) and identification of the main spectral groups. In a further step, we test different clustering algorithms (including hierarchical ones) to isolate, in a non-subjective manner, groups of records sharing similar features. We believe that from quantitative analysis of the retrieved clusters (e.g. of their temporal evolution), the signature of fault physical processes (in our case, the preparation of the magnitude 6 earthquake) can be retrieved. The newly detected signals will then be analyzed to learn more about the causative processes generating them.
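A minimal SOM of the kind described can be written in a few lines of NumPy. The 4 × 4 grid, the synthetic two-family "spectra", and the training schedule below are all illustrative assumptions, not the study's configuration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic daily spectra drawn from two spectral families, standing in for
# distinct classes of days in the real ~11k-spectrum dataset.
n_days, n_freq = 300, 32
family_a = np.exp(-np.linspace(0.0, 4.0, n_freq))   # low-frequency dominated
family_b = np.exp(-np.linspace(4.0, 0.0, n_freq))   # high-frequency dominated
spectra = np.vstack([
    family_a + 0.05 * rng.normal(size=(n_days // 2, n_freq)),
    family_b + 0.05 * rng.normal(size=(n_days // 2, n_freq)),
])

# Minimal SOM: a 4x4 grid of codebook vectors trained with a shrinking
# Gaussian neighbourhood around the best-matching unit (BMU).
grid = 4
W = rng.normal(scale=0.1, size=(grid * grid, n_freq))
coords = np.array([(i, j) for i in range(grid) for j in range(grid)], float)

for t in range(2000):
    x = spectra[rng.integers(n_days)]
    bmu = np.argmin(((W - x) ** 2).sum(axis=1))
    sigma = 2.0 * np.exp(-t / 1000.0)                # neighbourhood radius decays
    h = np.exp(-((coords - coords[bmu]) ** 2).sum(axis=1) / (2.0 * sigma ** 2))
    W += 0.1 * h[:, None] * (x - W)                  # pull neighbours toward x

# Mapping each day to its BMU reduces 32-D spectra to grid cells for inspection.
bmus = np.array([np.argmin(((W - x) ** 2).sum(axis=1)) for x in spectra])
```

Days with similar spectra land on nearby grid cells, which is the data reduction that makes visual inspection of decades of records tractable before the finer clustering step.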
How to cite: Poli, P. and Majstorovic, J.: Unsupervised classification of 30 years of near-fault seismological data: What can we learn about fault physics?, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13442, https://doi.org/10.5194/egusphere-egu2020-13442, 2020.
EGU2020-7744 | Displays | ITS4.6/NH6.7
Development of events detector for monitoring cryoseisms in upper soils
Nikita Afonin and Elena Kozlovskaya
Some problems in solid Earth geophysics require the analysis of huge amounts of continuous seismic data. One such problem is the investigation of so-called frost quakes, or cryoseisms, in the Arctic caused by extreme weather events. Weather extremes such as a rapid temperature decrease in combination with a thin snow cover can result in cracking of water-saturated soil and rock when the water suddenly freezes and expands. As cryoseisms can be hazardous for industrial and civil objects located in the near-field zone, monitoring them and analysing the weather conditions under which they occur is necessary to assess the hazard caused by extreme weather events. One of the important tasks in studying cryoseisms is the development of an efficient data processing routine capable of separating cryoseisms from other seismic events and from noise in continuous seismic data. In our study, we present an algorithm for the identification of cryoseisms that is based on the classical STA/LTA algorithm for seismic event detection and a neural network for classification using selected characteristics of the records.
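The classical STA/LTA detection stage can be sketched as follows; the window lengths, trigger threshold, and synthetic trace are illustrative choices, not the values used in the study:

```python
import numpy as np

def sta_lta(trace, n_sta, n_lta):
    """Classic STA/LTA: ratio of short- to long-term trailing averages
    of the squared trace (the characteristic function)."""
    cf = trace.astype(float) ** 2
    c = np.concatenate(([0.0], np.cumsum(cf)))
    # Trailing averages ending at sample k, defined once the LTA window fits.
    sta = (c[n_lta:] - c[n_lta - n_sta:-n_sta]) / n_sta
    lta = (c[n_lta:] - c[:-n_lta]) / n_lta
    return sta / np.maximum(lta, 1e-12)

rng = np.random.default_rng(4)
trace = 0.1 * rng.normal(size=2000)                             # background noise
trace[1200:1260] += np.sin(np.linspace(0.0, 30.0 * np.pi, 60))  # impulsive "event"

ratio = sta_lta(trace, n_sta=20, n_lta=200)
# Candidate detections (converted back to sample indices); in the study's
# routine these would be passed to the neural network for classification.
triggers = np.where(ratio > 5.0)[0] + (200 - 1)
```

The short window reacts quickly to impulsive energy while the long window tracks the noise level, so the ratio spikes at event onsets and stays near one otherwise.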
To assess the characteristics of cryoseisms, we used three-component recordings of a swarm of strong cryoseismic events with similar waveforms that were registered on 06.06.2016 by seismic station OUL in northern Finland. The strongest event of the swarm produced a fracture in the road surface and damaged the basements of buildings in the municipality of Oulu. Assuming that all events in the swarm were caused by the same mechanism (freezing of water-saturated soil), we used them as a training sample for the neural network. Analysis of these events showed that most of them share many of the selected record characteristics (central frequencies, duration, etc.) with the strongest event and with each other. Application of the algorithm to continuous seismic data recorded from the end of November 2015 to the end of February 2016 showed that the number of cryoseisms per day correlates strongly with variations in air temperature.
How to cite: Afonin, N. and Kozlovskaya, E.: Development of events detector for monitoring cryoseisms in upper soils, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7744, https://doi.org/10.5194/egusphere-egu2020-7744, 2020.
EGU2020-3493 | Displays | ITS4.6/NH6.7
Characterizing glacial processes applying classical beamforming and machine learning
Josefine Umlauft, Philippe Roux, Florent Gimbert, Albanne Lecointre, Bertrand Rouet-LeDuc, Daniel Taylor Trugman, and Paul Allan Johnson
The cryosphere is a highly active and dynamic environment that responds rapidly to changing climatic conditions. The processes behind this are poorly understood, as they remain challenging to observe. Glacial dynamics are strongly intermittent in time and heterogeneous in space; thus, monitoring with high spatio-temporal resolution is essential. In the course of the RESOLVE project, continuous seismic observations were obtained using a dense seismic network (100 nodes, Ø 700 m) installed on the Argentière Glacier (French Alps) during May 2018. This unique data set offers the chance to study targeted processes and dynamics within the cryosphere in detail on a local scale.
We apply classical beamforming over the aperture of the array (matched field processing) together with unsupervised machine learning techniques to identify, cluster, and locate seismic sources in 5D (x, y, z, velocity, time). Sources located with high resolution and accuracy can be related to processes and activity within the ice body, e.g. the geometry and dynamics of crevasses or the interaction at the glacier/bedrock interface, depending on meteorological conditions such as daily temperature fluctuations or snowfall. Our preliminary results indicate strong potential in poorly resolved sources, which can be observed with statistical consistency and reveal new insights into structural features and physical properties of the glacier (e.g. analysis of scatterers).
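Matched field processing can be illustrated with a noise-free narrowband toy problem: correlate the observed phasors with the replica field predicted at each candidate grid point and take the power peak. The wave speed, frequency, sensor geometry, and 2-D-only grid below are invented for the sketch (the actual processing searches in 5D):

```python
import numpy as np

rng = np.random.default_rng(5)

C = 1600.0      # assumed propagation speed (m/s) -- illustrative value
F0 = 10.0       # narrowband frequency (Hz) used for phase matching

# Hypothetical geometry: 10 sensors inside a ~700 m array, one true source.
sensors = rng.uniform(0.0, 700.0, size=(10, 2))
source = np.array([350.0, 300.0])

# Noise-free narrowband phasors observed at the sensors for this source.
d_true = np.linalg.norm(sensors - source, axis=1)
obs = np.exp(-2j * np.pi * F0 * d_true / C)

# Matched field processing: correlate the observation with the replica field
# predicted at every candidate grid point; the power peak marks the source.
xs = np.linspace(0.0, 700.0, 71)
best_power, best_xy = -1.0, None
for x in xs:
    for y in xs:
        d = np.linalg.norm(sensors - np.array([x, y]), axis=1)
        replica = np.exp(-2j * np.pi * F0 * d / C)
        power = abs(np.vdot(replica, obs)) / len(sensors)   # vdot conjugates replica
        if power > best_power:
            best_power, best_xy = power, (x, y)
```

With noiseless data the normalised power reaches 1 exactly at the true location; on real records the peak broadens and the additional velocity and depth axes of the 5-D search absorb the unknown medium properties.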
How to cite: Umlauft, J., Roux, P., Gimbert, F., Lecointre, A., Rouet-LeDuc, B., Trugman, D. T., and Johnson, P. A.: Characterizing glacial processes applying classical beamforming and machine learning, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3493, https://doi.org/10.5194/egusphere-egu2020-3493, 2020.
EGU2020-276 | Displays | ITS4.6/NH6.7
Exploiting satellite technology and machine learning to describe and predict hazardous shoreline changeMartin Rogers
East Anglia is particularly vulnerable to sea level rise, increases in storminess, coastal erosion, and coastal flooding. Critical national infrastructure (including Sizewell’s nuclear power stations and the Bacton gas terminals), population centres close to the coastal zone (> 600,000 in Norfolk and Suffolk) and iconic natural habitats (the Broads, attracting 7 million visitors a year) are under threat. Shoreline change, driven by complex interactions between environmental forcing factors and human shoreline modifications, is a key determinant of coastal vulnerability and exposure; its prediction is imperative for future coastal risk adaptation.
An automated, Python-based tool has been developed to simultaneously extract the water and vegetation lines from satellite imagery. PlanetLab multispectral optical imagery is used to provide multi-year, frequent (up to fortnightly) images with 3–5 m spatial resolution. Net shoreline change (NSC) has been calculated along multiple stretches of the East Coast of England, most notably for areas experiencing varying rates of change in front of, and adjacent to, ‘hard’ coastal defences. The joint use of water and vegetation line proxies enables calculation of inter-tidal width variability alongside NSC. The image resolution used provides new opportunities for data-led approaches to monitoring shoreline response to storm events and/or human shoreline modification.
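As a minimal sketch of the extraction step, spectral indices can separate water and vegetated pixels in a multispectral scene. The band order, reflectance values, and thresholds below are illustrative assumptions, not the tool's actual parameters.

```python
import numpy as np

def water_and_vegetation_masks(img, ndwi_thresh=0.0, ndvi_thresh=0.3):
    """img: (rows, cols, 4) array of reflectances, assumed band order B, G, R, NIR."""
    blue, green, red, nir = (img[..., i].astype(float) for i in range(4))
    eps = 1e-9
    ndwi = (green - nir) / (green + nir + eps)  # McFeeters NDWI: water > 0
    ndvi = (nir - red) / (nir + red + eps)      # NDVI: vegetation typically > 0.3
    return ndwi > ndwi_thresh, ndvi > ndvi_thresh

# Tiny synthetic scene: left third water, middle bare beach, right vegetation.
scene = np.zeros((3, 9, 4))
scene[:, :3] = [0.06, 0.10, 0.05, 0.02]   # water: green > NIR
scene[:, 3:6] = [0.10, 0.12, 0.14, 0.20]  # sand: moderate NIR, low NDVI
scene[:, 6:] = [0.04, 0.08, 0.05, 0.40]   # vegetation: high NIR
water, veg = water_and_vegetation_masks(scene)
# The waterline and vegetation line are the boundaries of these masks.
print(water[0], veg[0])
```

Tracing the mask boundaries per image date then yields the shoreline positions from which NSC and inter-tidal width are computed.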
Artificial Neural Networks (ANN) have been trained to predict shoreline evolution until 2040. Early results are presented, alongside considerations surrounding data pre-processing and input parameter selection requirements. Training data comprises decadal-scale shoreline positions recovered using automated shoreline detection. Shoreline position, alongside databases of nearshore bathymetry, sea defences, artificial beach renourishment, nearshore processes (wave and tide gauge data, meteorological fields), combined with land cover, population and infrastructure data act as inputs. Optimal input filtering and ANN configuration is derived using hindcasts.
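A toy version of the prediction step can be sketched as below, with a synthetic shoreline series and simple lagged-position features standing in for the study's full input database of bathymetry, defences, and forcing data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

# Synthetic shoreline: linear retreat plus a wave-driven oscillation (assumed).
rng = np.random.default_rng(1)
years = np.arange(1990, 2020, 0.1)
wave_energy = 1.0 + 0.5 * np.sin(2 * np.pi * years / 7.0)
shoreline = (-0.8 * (years - 1990) + 3.0 * wave_energy
             + rng.normal(0, 0.2, years.size))

# Features: two lagged shoreline positions plus the current forcing value.
X = np.column_stack([shoreline[:-2], shoreline[1:-1], wave_energy[2:]])
y = shoreline[2:]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16),
                                   max_iter=3000, random_state=0))
model.fit(X_tr, y_tr)
print("hindcast R^2:", round(model.score(X_te, y_te), 3))
```

Scoring on held-out data mirrors the hindcast-based selection of input filtering and ANN configuration described above.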
The research is timely; ANN predictions are compared with the Anglian Shoreline Management Plans (SMPs), which identify locations at greatest risk and assign future risk management funding. The findings of this research will feed into future revisions of the plans.
How to cite: Rogers, M.: Exploiting satellite technology and machine learning to describe and predict hazardous shoreline change, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-276, https://doi.org/10.5194/egusphere-egu2020-276, 2020.
EGU2020-31 | Displays | ITS4.6/NH6.7
Synergetic use of Planet data and high-resolution aerial images for windthrow detection based on Deep LearningMelanie Brandmeier, Wolfgang Deigele, Zayd Hamdi, and Christoph Straub
Due to climate change, the number of storms and, thus, the extent of forest damage have increased in recent years. The state of the art in damage detection is manual digitization based on aerial images, which requires a great amount of work and time. There have been numerous attempts to automate this process in the past, such as change detection based on SAR and optical data or the comparison of Digital Surface Models (DSMs) to detect changes in mean forest height. By using Convolutional Neural Networks (CNNs) in conjunction with GIS, we aim to completely streamline the detection and mapping process.
We developed and tested different CNNs for rapid windthrow detection based on Planet data that is rapidly available after a storm event, and on airborne data to increase accuracy after this first assessment. The study area is in Bavaria (ca. 165 square km) and data was provided by the agency for forestry (LWF). A U-Net architecture was compared to other approaches using transfer learning (e.g. VGG32) to find the most performant architecture for the task on both datasets. U-Net was originally developed for medical image segmentation and has proven to be very powerful for other classification tasks.
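A minimal U-Net-style model of the kind described can be sketched in Keras; the depth, filter counts, tile size, and 4-band input are illustrative assumptions rather than the authors' configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def tiny_unet(size=64, bands=4):
    """One-level encoder-decoder with a skip connection (U-Net's core idea)."""
    inp = layers.Input((size, size, bands))
    c1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    p1 = layers.MaxPooling2D()(c1)                       # downsample
    c2 = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
    u1 = layers.UpSampling2D()(c2)                       # upsample back
    m = layers.Concatenate()([u1, c1])                   # skip connection
    c3 = layers.Conv2D(16, 3, padding="same", activation="relu")(m)
    out = layers.Conv2D(1, 1, activation="sigmoid")(c3)  # damage probability
    return tf.keras.Model(inp, out)

model = tiny_unet()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
print(model.output_shape)
```

Training on tiles with manually digitized damage masks as labels then yields per-pixel windthrow probabilities that can be vectorized for the GIS workflow.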
Preliminary results highlight the potential of Deep Learning algorithms to detect damaged areas, with accuracies of over 91% on airborne data and 92% on Planet data. The proposed workflow, with complete integration into ArcGIS, is well suited for a rapid first assessment after a storm event, allowing better planning of the flight campaign and of first management tasks, followed by detailed mapping in a second stage.
How to cite: Brandmeier, M., Deigele, W., Hamdi, Z., and Straub, C.: Synergetic use of Planet data and high-resolution aerial images for windthrow detection based on Deep Learning, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-31, https://doi.org/10.5194/egusphere-egu2020-31, 2020.
EGU2020-1894 | Displays | ITS4.6/NH6.7
Uncertainties propagation in a hydrological empirical modelLeonardo Santos, Emerson Silva, Cíntia Freitas, and Roberta Bacelar
Empirical models have been applied in many works in the literature on hydrological modeling. However, there is still an open question about uncertainty propagation in these models. In this paper, we developed an empirical hydrological model using a machine learning approach: with the Keras interface and the TensorFlow library, we trained and tested a Multilayer Perceptron. Our case study was conducted with data from the city of Nova Friburgo, in the mountainous region of Rio de Janeiro, Brazil. Precipitation and river level data were obtained from 5 hydrological stations (in situ), with a resolution of 15 minutes over 2 years. To quantify the propagation of uncertainties, we applied a stochastic perturbation to the input data, following an a priori defined probability distribution, and compared the statistical moments of this distribution with the statistical moments of the output distribution. Based on the proposed accuracy and precision indices, we conclude from our case study that accuracy is higher but precision is lower for a uniformly distributed stochastic perturbation when compared to an equivalent triangular distribution.
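The perturbation experiment can be sketched as follows, with a simple quadratic rainfall-to-level surrogate standing in for the trained Multilayer Perceptron; the nominal input, noise supports, and surrogate coefficients are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def surrogate(p):
    """Stand-in for the trained network: a mildly non-linear rain -> level map."""
    return 0.8 * p + 0.02 * p ** 2

rain = 20.0          # nominal precipitation input (assumed, mm per 15 min)
n = 100_000
results = {}
for name, noise in {
    "uniform": rng.uniform(-2, 2, n),
    "triangular": rng.triangular(-2, 0, 2, n),
}.items():
    out = surrogate(rain + noise)
    bias = out.mean() - surrogate(rain)   # accuracy: shift of the output mean
    spread = out.std()                    # precision: output dispersion
    results[name] = (bias, spread)
    print(f"{name:10s} input var={noise.var():.3f} "
          f"bias={bias:.4f} output std={spread:.4f}")
```

On the same support, the uniform noise has the larger variance and hence the larger output spread; the sign and size of such effects depend on the model's non-linearity, so this toy need not reproduce the case-study ranking of accuracy and precision.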
How to cite: Santos, L., Silva, E., Freitas, C., and Bacelar, R.: Uncertainties propagation in a hydrological empirical model, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1894, https://doi.org/10.5194/egusphere-egu2020-1894, 2020.
EGU2020-3847 | Displays | ITS4.6/NH6.7
Characterizing the urban waterlogging variation in highly urbanized coastal cities: A watershed-based stepwise cluster analysis model approachZhifeng Wu, Qifei Zhang, Yinbiao Chen, and Paolo Tarolli
Under the combined effects of climate change and rapid urbanization, low-lying coastal cities are vulnerable to urban waterlogging events. Urban waterlogging refers to the accumulation of water caused by rainwater that cannot be discharged through the drainage system in time, and it is affected by both natural conditions and human activities. Given the spatial heterogeneity of the urban landscape and the non-linear interaction between influencing factors, in this work we propose a novel approach to characterize urban waterlogging variation in highly urbanized areas by implementing a watershed-based Stepwise Cluster Analysis Model (SCAM) that considers both natural and anthropogenic variables (i.e. topographic factors, accumulated precipitation, land surface characteristics, drainage density, and GDP). The model is based on the theory of multivariate analysis of variance and can effectively capture the non-stationary and complex relationship between urban waterlogging and natural and anthropogenic factors. The watershed-based analysis overcomes the shortcomings of the negative-sample selection method employed in previous studies, which greatly improves model reliability and accuracy. Furthermore, different land-use scenarios (the proportion of impervious surfaces remaining unchanged, or increasing by 5% and 10%) and rainfall scenarios (accumulated precipitation increasing by 5%, 10%, 20%, and 50%) are adopted to simulate waterlogging density variation and thus identify future waterlogging-prone areas. We consider waterlogging events in a highly urbanized coastal city, the central urban districts of Guangzhou (China), from 2009 to 2015 as a case study.
The results demonstrate that: (1) the SCAM shows a high degree of fitting and predictive capability on both the calibration and validation data sets, illustrating that it can successfully reveal the complex mechanisms linking urban waterlogging to natural and anthropogenic factors; (2) the SCAM provides more accurate and detailed simulations than other machine learning models (LR, ANN, SVM), reflecting the occurrence and distribution of urban waterlogging events more realistically and in more detail; (3) under different urbanization and precipitation scenarios, urban waterlogging density and waterlogging-prone areas vary greatly, so strategies should be developed to cope with different future scenarios. Although heavy precipitation can increase the occurrence of urban waterlogging, urban expansion characterized by an increase in impervious surface abundance was the dominant cause of urban waterlogging in the analyzed study area. This study extends scientific understanding, offers theoretical and practical references for developing waterlogging management strategies, and promotes the further application of the stepwise cluster analysis model in the assessment and simulation of urban waterlogging variation.
How to cite: Wu, Z., Zhang, Q., Chen, Y., and Tarolli, P.: Characterizing the urban waterlogging variation in highly urbanized coastal cities: A watershed-based stepwise cluster analysis model approach, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3847, https://doi.org/10.5194/egusphere-egu2020-3847, 2020.
EGU2020-5194 | Displays | ITS4.6/NH6.7 | Highlight
A Real-time Traffic Routing Framework for Flood Risk Management Using Live Urban Observation DataNa Dong, Craig Robson, Stuart Barr, and Richard Dawson
Reliable transportation infrastructure is crucial to ensure the mobility, safety and economy of urban areas. Flooding in urban environments can disrupt the flow of people, goods, services and emergency responders as a result of disruption or damage to transport systems. Pervasive sensors for urban monitoring and traffic surveillance, coupled with big data analytics, provide new opportunities for managing the impacts of urban flooding through intelligent traffic management systems in real-time.
A framework has been developed to assess the effect of urban surface water on road network traffic movements, accounting for real-time traffic conditions and changes in road capacity under flood conditions. Through this framework, inferred future traffic disruptions and short-term congestion, along with their spatiotemporal propagation, can be provided to assist flood risk warning and safety guidance. Within this framework, flood modelling results from the HiPIMS 2D hydrodynamic model and traffic predictions from machine learning are integrated to enable improved traffic forecasting that accounts for surface water conditions. Information from 130 traffic counters and 46 CCTV cameras distributed over Newcastle upon Tyne (UK) is employed, including location, historical traffic flow, and imagery.
Figure 1 shows a flowchart of the traffic routing system. Congestion is evaluated on the basis of the level of service (LOS) value, a function of both free-flow speed and actual traffic density that provides a quantitative measure of the quality of vehicle traffic service. Surface water reduces driving speeds, which can in turn cause a sudden increase of traffic density near the flooded road and queuing on connected roads. A relationship among flood depth, free-flow speed, flow rate and density has been constructed to examine the variation of the density curve throughout the event alongside the surface flood dynamics. Based on the new speed-flow model and the degree of congestion, an updated road network is obtained using geometric calculation and network analysis. Finally, flooded traffic flows are rerouted by shortest-path calculation based on origin-destination information and the changes in road capacity and vehicle speeds. A case study of a flood event similar to that of 28 June 2012, which has a return period of 1 in 100 years, is demonstrated for Newcastle upon Tyne (UK).
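The rerouting step can be sketched as a shortest-path computation over flood-adjusted edge weights. The toy network, depths, and the linear speed-depth rule below are assumptions, not the framework's calibrated speed-flow model.

```python
import networkx as nx

def flooded_travel_time(length_m, free_speed_kmh, depth_m):
    """Assumed rule: speed falls linearly to zero at 0.3 m of standing water."""
    speed = free_speed_kmh * max(0.0, 1.0 - depth_m / 0.3)
    if speed == 0.0:
        return float("inf")                      # impassable link
    return (length_m / 1000.0) / speed * 3600.0  # travel time in seconds

G = nx.DiGraph()
roads = [  # (from, to, length m, free-flow speed km/h, flood depth m)
    ("A", "B", 800, 50, 0.0),
    ("B", "D", 600, 50, 0.25),   # heavily flooded link
    ("A", "C", 900, 40, 0.0),
    ("C", "D", 900, 40, 0.0),
]
for u, v, length, speed, depth in roads:
    G.add_edge(u, v, weight=flooded_travel_time(length, speed, depth))

route = nx.shortest_path(G, "A", "D", weight="weight")
print(route)  # avoids the flooded B -> D link
```

Re-running this computation as depths and observed densities update gives the real-time behaviour the framework targets.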
Figure 1. Flowchart of the integrated traffic routing model that accounts for surface water flooding.
How to cite: Dong, N., Robson, C., Barr, S., and Dawson, R.: A Real-time Traffic Routing Framework for Flood Risk Management Using Live Urban Observation Data, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5194, https://doi.org/10.5194/egusphere-egu2020-5194, 2020.
EGU2020-6277 | Displays | ITS4.6/NH6.7
A novel artificial neural network for flood forecasting based on deep learning encoder-decoder architectureKangling Lin, Hua Chen, Chong-Yu Xu, Yanlai Zhou, and Shenglian Guo
With the rapid growth of deep learning in recent years, artificial neural networks have been propelled to the forefront of flood forecasting through their end-to-end learning ability. The encoder-decoder architecture, a novel deep feature extraction approach that captures the inherent relationships of the data involved, has emerged in time-sequence forecasting. Following its advances in sequence-to-sequence learning, it has been applied in many fields, such as machine translation, energy, and the environment; however, it is seldom used in hydrological modelling. In this study, a new neural network is developed to forecast floods based on the encoder-decoder architecture. Two deep learning methods, the Long Short-Term Memory (LSTM) network and the Temporal Convolutional Network (TCN), are selected as encoders, while the LSTM is chosen as the decoder; their results are compared with those from a standard LSTM without the encoder-decoder architecture.
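A minimal Keras sketch of an LSTM encoder / LSTM decoder of the kind described; the window lengths, unit counts, and the two-feature (e.g. rain plus flow) input are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

# 24 input steps of 2 features -> 6 forecast steps of flow (assumed sizes).
n_in, n_out, n_feat = 24, 6, 2

inp = layers.Input((n_in, n_feat))
_, h, c = layers.LSTM(32, return_state=True)(inp)   # encoder: summary state
dec_in = layers.RepeatVector(n_out)(h)              # seed one copy per step
dec = layers.LSTM(32, return_sequences=True)(dec_in, initial_state=[h, c])
out = layers.TimeDistributed(layers.Dense(1))(dec)  # one flow value per step

model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="mse")
print(model.output_shape)
```

Swapping the encoder for a stack of dilated causal convolutions gives the TCN-encoder variant the study compares against.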
These models were trained and tested using hourly flood-event data from 2009 to 2015 in the Jianxi basin, China. The results indicate that the new flood forecasting networks based on encoder-decoder architectures generally perform better than the standard LSTM: they achieve better goodness-of-fit between forecasted and observed floods and show promising performance in multi-index assessment. The TCN as an encoder offers better model stability and accuracy than the LSTM as an encoder, especially for longer forecast periods and larger floods. The results also show that the encoder-decoder architecture can serve as an effective deep learning solution in flood forecasting.
How to cite: Lin, K., Chen, H., Xu, C.-Y., Zhou, Y., and Guo, S.: A novel artificial neural network for flood forecasting based on deep learning encoder-decoder architecture, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6277, https://doi.org/10.5194/egusphere-egu2020-6277, 2020.
EGU2020-6419 | Displays | ITS4.6/NH6.7
Application of artificial neural network model for regional frequency analysis at Han River basin, South KoreaJoohyung Lee, Hanbeen Kim, Taereem Kim, and Jun-Haeng Heo
Regional frequency analysis (RFA) is used to improve the accuracy of quantile estimates at sites where observed data are insufficient. With the development of computing technology, complex computations on large data sets are possible on an ordinary personal computer, and machine learning methods have therefore been widely applied in many disciplines, including hydrology; several previous studies have applied them to RFA. The main purpose of this study is to apply an artificial neural network (ANN) model to RFA and to measure its performance. For a homogeneous region in the Han River basin, rainfall gauging sites are divided into training and testing groups. The training group consists of sites with record lengths of more than 30 years; the testing group contains sites whose record lengths span 10 to 30 years. Various hydro-meteorological variables are used as the input layer, and the parameters of the generalized extreme value (GEV) distribution fitted to annual maximum rainfall data are used as the output layer of the ANN model. The root mean square error (RMSE) between predicted and observed quantiles is then calculated. To evaluate model performance, the RMSEs of quantiles estimated by the ANN model are compared with those of the index flood model.
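The evaluation step can be sketched with SciPy: fit a GEV to synthetic annual maxima at one site, treat the fitted parameters as the target the ANN would predict, and score return-period quantiles by RMSE. The parameter values and record length are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
shape_c, loc, scale = -0.1, 50.0, 12.0        # assumed "true" GEV (scipy's c)
amax = stats.genextreme.rvs(shape_c, loc=loc, scale=scale,
                            size=40, random_state=rng)  # 40 yr of annual maxima

# "Observed" parameters from the at-site fit (the ANN's training target).
c_fit, loc_fit, scale_fit = stats.genextreme.fit(amax)

T = np.array([10, 20, 50, 100])               # return periods (years)
q_obs = stats.genextreme.ppf(1 - 1 / T, c_fit, loc=loc_fit, scale=scale_fit)
# Stand-in for the ANN's predicted parameters: here, the true values.
q_pred = stats.genextreme.ppf(1 - 1 / T, shape_c, loc=loc, scale=scale)

rmse = float(np.sqrt(np.mean((q_pred - q_obs) ** 2)))
print("quantile RMSE:", round(rmse, 2))
```

In the study, the ANN maps hydro-meteorological covariates to the three GEV parameters at short-record sites, and the same quantile RMSE is compared against the index flood method.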
How to cite: Lee, J., Kim, H., Kim, T., and Heo, J.-H.: Application of artificial neural network model for regional frequency analysis at Han River basin, South Korea, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6419, https://doi.org/10.5194/egusphere-egu2020-6419, 2020.
EGU2020-8365 | Displays | ITS4.6/NH6.7
Random forest classification of morphology in the northern Gerecse (Hungary) to predict landslide-prone slopes
Gáspár Albert and Dávid Gerzsenyi
The morphology of the Gerecse Hills bears the imprints of the fluvial terraces of the Danube River, Neogene tectonism, and Quaternary erosion. The solid bedrock is composed of Mesozoic and Paleogene limestones, marls, and sandstones, covered by 115 m thick layers of unconsolidated Quaternary fluvial, lacustrine, and aeolian sediments. Hillslopes, stream valleys, and loessy riverside bluffs are prone to landslides, which have caused serious damage in inhabited and agricultural areas in the past. Attempts to map these landslides have been made since the 1970s, with the observations documented in the National Landslide Cadastre (NLC) inventory. These records are sporadic, concentrated on certain locations, and often describe the state and extent of the landslides inaccurately. The aim of the present study was to complete and correct the landslide inventory using quantitative modelling. All inventory records in the 480 km² study area were revisited and corrected. Using objective criteria, the renewed records and additional sample locations were sorted into one of the following morphological categories: scarps, debris, transitional areas, stable accumulation areas, stable hilltops, and stable slopes. The categorized map of these observations served as training data for random forest classification (RFC).
Random forest is a powerful tool for multivariate classification that uses an ensemble of decision trees. In our case, predictions were made for each pixel of medium-resolution (~10 m) rasters. The predictor variables of the decision trees were morphometric and geological indices. The terrain indices were derived from the MERIT DEM with SAGA routines, and the categorized geological data come from a medium-scale geological map [1]. The predictor variables were packed into a multi-band raster, and the RFC method was executed in R 3.5 with RStudio.
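The per-pixel classification idea can be sketched as follows. This is a Python stand-in for the R-based workflow above, with entirely synthetic "pixels" and an invented labelling rule, shown only to make the raster-to-classifier data flow concrete:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Synthetic stand-in for a multi-band predictor raster: each "pixel" has
# morphometric indices (slope, curvature) plus a categorical geology code
n_pixels = 2000
slope = rng.uniform(0.0, 40.0, n_pixels)
curvature = rng.normal(0.0, 1.0, n_pixels)
geology = rng.integers(0, 4, n_pixels)

# Toy rule standing in for the field-based labels: steep, convex pixels
# on geological unit 2 are "scarp" (1), all others "stable" (0)
labels = ((slope > 25) & (curvature > 0) & (geology == 2)).astype(int)

X = np.column_stack([slope, curvature, geology])
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

# Rate of well-predicted pixels against the labels, mirroring the
# validation measure used in the study
accuracy = float((rf.predict(X) == labels).mean())
print(f"rate of well-predicted pixels: {accuracy:.2f}")
```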
After testing several combinations of the predictor variables and two different categorisations of the geological data, the best prediction reached ca. 80% accuracy. The model was validated by computing the rate of well-predicted pixels relative to the total cell count of the training data. The results showed that probable landslide-prone slopes are not restricted to the areas recorded in the National Landslide Cadastre inventory: based on the model, only ~6% of the estimated locations of highly unstable slopes (scarps) fall within the NLC polygons in the study area.
The project was partly supported by the Thematic Excellence Program, Industry and Digitization Subprogram, NRDI Office, project no. ED_18-1-2019-0030 (from the part of G. Albert) and the ÚNKP-19-3 New National Excellence Program of the Ministry for Innovation and Technology (from the part of D. Gerzsenyi).
Reference:
[1] Gyalog L., and Síkhegyi F., eds. Geological map of Hungary (scale: 1:100 000). Budapest, Hungary, Geological Institute of Hungary, 2005.
How to cite: Albert, G. and Gerzsenyi, D.: Random forest classification of morphology in the northern Gerecse (Hungary) to predict landslide-prone slopes, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8365, https://doi.org/10.5194/egusphere-egu2020-8365, 2020.
EGU2020-9257 | Displays | ITS4.6/NH6.7 | Highlight
Towards an automatic algorithm for natural oil slicks delineation using Copernicus Sentinel-1 imagery
Cristina Vrinceanu, Stuart Marsh, and Stephen Grebby
Radar imagery, and specifically SAR imagery, is the preferred data type for the detection and delineation of oil slicks formed following the discharge of oil through human activities or natural occurrences. The contrast between the dark oil surfaces, characterized by a low backscatter return, and the rough, bright sea surface with higher backscatter has been exploited for decades in studies and operational processes.
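The core of that contrast-based detection, dark-spot segmentation of low-backscatter regions, can be sketched in a few lines. This is only an illustration with a synthetic scene and a simple global threshold; operational detectors (including the algorithm presented here) use adaptive, locally estimated thresholds and further discrimination steps:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)

# Synthetic SAR-like backscatter scene (dB): bright, rough sea surface
# with one low-backscatter patch mimicking an oil slick
scene = rng.normal(-8.0, 1.0, size=(100, 100))
scene[40:60, 30:70] -= 10.0

# Global dark-spot threshold: pixels well below the scene mean
dark = scene < scene.mean() - 2.0 * scene.std()

# Group dark pixels into candidate slicks and measure their sizes
labelled, n_candidates = ndimage.label(dark)
sizes = ndimage.sum(dark, labelled, index=range(1, n_candidates + 1))
print(n_candidates, int(max(sizes)))
```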
Despite the semi-automatic nature of traditional detection approaches, the workflow has always relied on a trained human operator to validate results and discriminate between oil-stained surfaces and other ocean phenomena that produce similar signatures in SAR imagery (e.g., algal blooms, grease ice). The process is therefore time- and resource-consuming, and its results are highly subjective. Automating it, to reduce processing and analysis time while producing consistent results, is the ultimate goal.
Addressing this challenge, a new algorithm is proposed in this presentation. Building on state-of-the-art methods, the algorithm makes use of the latest technological developments for processing and analyzing features on the ocean surface using a synergistic approach combining SAR, optical and ancillary datasets.
This presentation will focus on the results obtained by ingesting high-resolution open SAR data delivered by the Copernicus Sentinel-1 satellites into the algorithm. This represents a significant advancement over traditional approaches, both in utilizing imagery from contemporary SAR missions instead of the heritage missions (ERS, ENVISAT) and in deploying both conventional classification and artificial intelligence techniques (e.g., CNNs).
The study also highlights the strengths and shortcomings of each type of technique in different scenarios, to support recommendations on the appropriate algorithm to use. The full architecture of the SAR component of the algorithm will be detailed, and case-study results over a set of known seepage sites and potential candidate sites will be presented, demonstrating the reliability of this new method.
How to cite: Vrinceanu, C., Marsh, S., and Grebby, S.: Towards an automatic algorithm for natural oil slicks delineation using Copernicus Sentinel-1 imagery, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9257, https://doi.org/10.5194/egusphere-egu2020-9257, 2020.
EGU2020-9487 | Displays | ITS4.6/NH6.7
Detecting avalanche debris from SAR imaging: a comparison of convolutional neural networks and variational autoencoders
Sophie Giffard-Roisin, Saumya Sinha, Fatima Karbou, Michael Deschatres, Anna Karas, Nicolas Eckert, Cécile Coléou, and Claire Monteleoni
Achieving reliable observations of avalanche debris is crucial for many applications including avalanche forecasting. The ability to continuously monitor the avalanche activity, in space and time, would provide indicators on the potential instability of the snowpack and would allow a better characterization of avalanche risk periods and zones. In this work, we use Sentinel-1 SAR (synthetic aperture radar) data and an independent in-situ avalanche inventory (as ground truth labels) to automatically detect avalanche debris in the French Alps during the remarkable winter season 2017-18.
Two main challenges are specific to these data: (i) the imbalance of the data, with a small number of positive (avalanche) samples, and (ii) the uncertainty of labels coming from a separate in-situ inventory. We compare two deep learning methods on SAR image patches to tackle these issues: a fully supervised convolutional neural network model and an unsupervised approach that detects anomalies with a variational autoencoder. Our preliminary results show that we can locate new avalanche deposits with as much as 77% confidence on the most susceptible mountain zone (compared to 53% with a baseline method) on a balanced dataset.
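The unsupervised route, fitting a model of "normal" patches and flagging those it reconstructs poorly, can be sketched with PCA standing in for the variational autoencoder (same reconstruction-error logic, much simpler model; synthetic patches, not the authors' data or code):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Flattened image patches: many "normal" background patches and a few
# anomalous ones with different statistics (here, higher variance)
normal = rng.normal(0.0, 1.0, size=(500, 64))
anomalies = rng.normal(0.0, 3.0, size=(10, 64))

# Fit a low-dimensional model of the normal patches only: the
# unsupervised analogue of training an autoencoder on background scenes
pca = PCA(n_components=8).fit(normal)

def recon_error(x):
    # Distance between a patch and its low-dimensional reconstruction
    return np.linalg.norm(x - pca.inverse_transform(pca.transform(x)), axis=1)

# Flag patches whose reconstruction error exceeds the normal range
threshold = np.quantile(recon_error(normal), 0.99)
detected = float((recon_error(anomalies) > threshold).mean())
print(f"fraction of anomalies detected: {detected:.2f}")
```

A VAE replaces the linear projection with a learned probabilistic encoder-decoder, but the detection criterion, anomalously high reconstruction error, is the same.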
In order to make an efficient use of remote sensing measurements on a complex terrain, we explore the following question: to what extent can deep learning methods improve the detection of avalanche deposits and help us to derive relevant avalanche activity statistics at different scales (in time and space) that could be useful for a large number of users (researchers, forecasters, government operators)?
How to cite: Giffard-Roisin, S., Sinha, S., Karbou, F., Deschatres, M., Karas, A., Eckert, N., Coléou, C., and Monteleoni, C.: Detecting avalanche debris from SAR imaging: a comparison of convolutional neural networks and variational autoencoders, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9487, https://doi.org/10.5194/egusphere-egu2020-9487, 2020.
EGU2020-11123 | Displays | ITS4.6/NH6.7
Automatic detection of volcanic eruptions in Doppler radar observations using a neural network approach
Matthias Hort, Daniel Uhle, Fabio Venegas, Lea Scharff, Jan Walda, and Geoffroy Avard
Immediate detection of volcanic eruptions is essential for mitigating their impact on the health of people living near a volcano and on infrastructure and aviation. Eruption detection is most often done by visual observation or the analysis of acoustic data. While visual observation is often hampered by environmental conditions, infrasound data usually capture the onset of an event. Doppler radar data, admittedly not available for many volcanoes, additionally provide information on the dynamics of the eruption and the amount of material released. Eruptions can easily be detected in these data by visual analysis, and here we present a neural network approach for detecting them automatically. We use data recorded at Colima volcano, Mexico, in 2014/2015 and a data set recorded at Turrialba volcano between 2017 and 2019. In a first step we picked eruptions, rain, and typical noise in both data sets, which were then used for training two networks (training data set) and testing their performance on a separate test data set. The classification accuracy for the different types of signals was between 95 and 98% for both data sets, which we consider quite successful. In the case of the Turrialba data set, eruptions were picked based on OVSICORI observations. When classifying the complete Turrialba data set with the trained network, an additional 40 eruptions were found that were not in the OVSICORI catalogue.
In most cases, data from the instruments are transmitted to an observatory by radio, so the volume of data is an issue. We therefore tested how much the data could be reduced while still successfully detecting an eruption. We also kept the network as small as possible, so that it could ideally run on a small computer (e.g., a Raspberry Pi) for on-site eruption detection, with only the notification that an eruption has been detected needing to be transmitted.
How to cite: Hort, M., Uhle, D., Venegas, F., Scharff, L., Walda, J., and Avard, G.: Automatic detection of volcanic eruptions in Doppler radar observations using a neural network approach, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11123, https://doi.org/10.5194/egusphere-egu2020-11123, 2020.
EGU2020-19060 | Displays | ITS4.6/NH6.7
Predictor dataset selection method for construction of ML-based Models in flood detection using mutual information
Mohammad Mehdi Bateni, Mario Martina, and Luigi Cesarini
The field of information theory, originally developed within the context of communication engineering, deals with quantifying the information present in a realization of a stochastic process. Mutual information is a measure of the mutual dependence between two variables and can be determined from marginal and joint entropies; it is an efficient tool for investigating both linear and non-linear dependencies. In this research, transformed variables, each based on rainfall data from different datasets in the Dominican Republic, are used in neural network and SVM models to classify flood/no-flood events. A selection procedure identifies skillful inputs to the flood detection model: the relationship between the flood/no-flood output and each predictor (relevance), as well as among the predictors themselves (redundancy), is assessed with the mutual information metric, and the set of predictors is chosen to maximize relevance to the predictand while minimizing redundancy between predictors.
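This maximum-relevance, minimum-redundancy criterion can be sketched with a greedy selection loop (synthetic predictors and a toy flood label; the real study uses rainfall-derived variables, but the selection logic is the same):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

rng = np.random.default_rng(0)

# Synthetic rainfall-derived predictors: f0 drives flooding, f1 is a
# near-duplicate of f0 (redundant), f2 is unrelated noise
n = 500
f0 = rng.normal(size=n)
f1 = f0 + 0.05 * rng.normal(size=n)
f2 = rng.normal(size=n)
X = np.column_stack([f0, f1, f2])
y = (f0 > 0.5).astype(int)  # flood / no-flood label

# Relevance: mutual information between each predictor and the predictand
relevance = mutual_info_classif(X, y, random_state=0)

# Greedy max-relevance, min-redundancy (mRMR) selection of 2 predictors
selected = [int(np.argmax(relevance))]
while len(selected) < 2:
    scores = []
    for j in range(X.shape[1]):
        if j in selected:
            scores.append(-np.inf)
            continue
        # Redundancy: mean mutual information with already-selected predictors
        redundancy = np.mean([
            mutual_info_regression(X[:, [j]], X[:, s], random_state=0)[0]
            for s in selected
        ])
        scores.append(relevance[j] - redundancy)
    selected.append(int(np.argmax(scores)))

print(selected)  # the redundant near-duplicate should lose out to f2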
How to cite: Bateni, M. M., Martina, M., and Cesarini, L.: Predictor dataset selection method for construction of ML-based Models in flood detection using mutual information, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19060, https://doi.org/10.5194/egusphere-egu2020-19060, 2020.
EGU2020-19802 | Displays | ITS4.6/NH6.7
Deep Learning for image based weather estimation: a focus on the snow
Pierre Lepetit, Cécile Mallet, and Laurent Barthes
Road traffic is highly sensitive to weather conditions, and snow accumulating on the road can cause serious safety problems. But road-condition monitoring is as difficult as it is critical: in mid-latitude countries the spatial variability of snowfall is high, while accurate characterization of snow accumulation relies mainly on costly sensors.
In recent decades, webcams have become ubiquitous along the road network. Their quality varies, but even low-resolution images capture information about the extent and thickness of the snow layer, and forecasters already use these images to refine their analyses. Automatic extraction of the relevant meteorological information would therefore be very useful.
Recently, generic and efficient computer vision methods have emerged. Their application to image-based weather estimation has become an attractive field of research. However, the scope of existing work is generally limited to high-resolution images from one or a few cameras.
In this study, we show that, for a moderate labelling effort, recent machine learning approaches can predict quantitative indices of snow depth across a large variety of webcam settings and illumination conditions.
Our approach is based on two datasets. The smaller one contains about 2,000 images from ten webcams that were set up near sensors dedicated to snow depth measurements.
The larger one contains 20,000 images from 200 cameras of the AMOS dataset. Standard meteorological rules of human observation and the specifics of the webcams were taken into account to manually label each image. The labels cover not only the thickness and extent of the snow layer but also the precipitation (rain or snow, presence of streaks), the optical range, and the foreground noise. Both datasets contain night images (45%) and at least 15% of images corrupted by foreground noise (dirt, droplets, and snowflakes on the lens).
The labels of the AMOS subset allowed us to train ranking models for snow depth and visibility in a multi-task setting. The models are then calibrated on the smaller dataset. We tested several versions built from pre-trained CNNs (ResNet152, DenseNet161, and VGG16).
Results are promising, with up to 85% accuracy on comparison tasks, but a 10% decrease is observed when the test webcams were not used during the training phase.
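The comparison-task accuracy quoted above can be illustrated as follows (a toy setup with hypothetical model scores and true snow depths, shown only to make the pairwise ranking metric concrete):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: true snow depths for 200 images and a model score
# that is correlated with depth but noisy
true_depth = rng.uniform(0.0, 50.0, 200)
scores = true_depth + rng.normal(0.0, 8.0, 200)

# Comparison task: for random image pairs, does the model order the two
# images the same way the true depths do?
i = rng.integers(0, 200, 1000)
j = rng.integers(0, 200, 1000)
valid = i != j
correct = (scores[i] > scores[j]) == (true_depth[i] > true_depth[j])
accuracy = float(correct[valid].mean())
print(f"pairwise comparison accuracy: {accuracy:.2f}")
```

Ranking models are trained directly on such pairwise orderings, which is why a separate calibration step is needed to turn the scores into physical snow depths.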
A case study based on a widespread snow event over the French territory will be presented. We will show the potential of our method through a comparison with operational model forecasts.
How to cite: Lepetit, P., Mallet, C., and Barthes, L.: Deep Learning for image based weather estimation: a focus on the snow, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19802, https://doi.org/10.5194/egusphere-egu2020-19802, 2020.
EGU2020-20991 | Displays | ITS4.6/NH6.7
Discussion on The Construction Technology of Marine Environment Safety Knowledge Based on Knowledge Graphs
Lie Sun, Le Wu, Fei Xu, and ZhanLong Song
The inability of machines to understand and reason over semantic knowledge in the field of emergency-response decision-making for marine environment safety is one of the difficulties in intelligent emergency response to marine disasters. Exploiting the advantages of knowledge graphs in semantic search and intelligent recommendation is an important goal in constructing a marine environment safety knowledge base. We summarize knowledge representation methods based on knowledge graphs, analyze the characteristics and difficulties of knowledge representation for emergency decision-making in marine environment safety, construct the knowledge system of the marine environment safety knowledge base, and propose an approach for building that knowledge base on knowledge graphs.
How to cite: Sun, L., Wu, L., Xu, F., and Song, Z.: Discussion on The Construction Technology of Marine Environment Safety Knowledge Based on Knowledge Graphs, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20991, https://doi.org/10.5194/egusphere-egu2020-20991, 2020.
EGU2020-22179 | Displays | ITS4.6/NH6.7
Predicting flood responses from spatial rainfall variability and basin morphology through machine learningJorge Duarte, Pierre E. Kirstetter, Manabendra Saharia, Jonathan J. Gourley, Humberto Vergara, and Charles D. Nicholson
Predicting flash floods at short time scales as well as their impacts is of vital interest to forecasters, emergency managers and community members alike. Particularly, characteristics such as location, timing, and duration are crucial for decision-making processes for the protection of lives, property and infrastructure. Even though these characteristics are primarily driven by the causative rainfall and basin geomorphology, untangling the complex interactions between precipitation and hydrological processes becomes challenging due to the lack of observational datasets which capture diverse conditions.
This work follows up on previous efforts to incorporate spatial rainfall moments as viable predictors of flash flood event characteristics, such as lag time and the exceedance of flood stage thresholds, at gauged locations over the Conterminous United States (CONUS). These variables were modeled by applying various supervised machine learning techniques to a database of flood events. The data included morphological, climatological, streamflow, and precipitation data from over 21,000 flood-producing rainfall events that occurred over 900+ different basins throughout the CONUS between 2002 and 2011. The dataset included basin parameters and indices derived from radar-based precipitation, representing sub-basin-scale rainfall spatial variability for each storm event. Both classification and regression models were constructed, and variable importance analysis was performed to determine the relevant factors reflecting hydrometeorological processes. In this iteration, a closer look at model performance consistency and variable selection aims to further explore the explanatory power of rainfall moments for flood characteristics.
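Spatial rainfall moments of the kind used as predictors here are commonly defined as rainfall-weighted statistics of flow distance to the outlet, scaled by their uniform-rainfall counterparts. The sketch below illustrates that idea; the exact formulation used by the authors is not given in the abstract, so the function and its normalization are illustrative assumptions.

```python
def rainfall_spatial_moment(rain, flow_dist, order=1):
    """Scaled spatial moment of rainfall over basin cells.

    rain: rainfall accumulation per cell; flow_dist: flow distance
    from each cell to the outlet. For order=1, a value near 1 means
    rainfall is distributed like a uniform field, <1 means rainfall
    concentrated near the outlet, >1 near the headwaters.
    """
    total = sum(rain)
    weighted = sum(r * d ** order for r, d in zip(rain, flow_dist)) / total
    uniform = sum(d ** order for d in flow_dist) / len(flow_dist)
    return weighted / uniform

# Rainfall concentrated in the headwaters of a 4-cell toy basin:
delta1 = rainfall_spatial_moment([0, 0, 10, 10], [1.0, 2.0, 3.0, 4.0])
# delta1 > 1, flagging a storm centred far from the outlet
```

Moments like this condense a full radar rainfall field into a handful of scalars per event, which is what makes them usable as features in classification and regression models.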
How to cite: Duarte, J., Kirstetter, P. E., Saharia, M., Gourley, J. J., Vergara, H., and Nicholson, C. D.: Predicting flood responses from spatial rainfall variability and basin morphology through machine learning, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22179, https://doi.org/10.5194/egusphere-egu2020-22179, 2020.
ITS4.8/ESSI4.1 – Data science, Analytics and Visualization: The challenges and opportunities for Earth and Space Science
EGU2020-9923 | Displays | ITS4.8/ESSI4.1
Methane Source Finder: A web-based data portal for exploring methane dataAndrew Thorpe, Riley Duren, Robert Tapella, Brian Bue, Kelsey Foster, Vineet Yadav, Talha Rafiq, Francesca Hopkins, Kevin Gill, Joshua Rodriguez, Aaron Plave, Daniel Cusworth, and Charles Miller
The Methane Source Finder is a web-based data portal developed under NASA’s CMS and ACCESS programs for exploring methane data in the state of California. This open-access interactive map allows users to discover, analyze, and download data across a range of spatial scales derived from remote sensing, surface monitoring, and bottom-up infrastructure information. This includes methane plume images and associated emission estimates derived from the 2016-2018 California Methane Survey using the airborne imaging spectrometer AVIRIS-NG. The fine spatial resolution (typically 3 m) of the AVIRIS-NG products, combined with the Vista infrastructure database of over 270,000 components statewide, permits direct attribution of emissions to individual point source locations. These point source products have benefited from evaluation and feedback from state and local agencies and private sector companies, and in some cases were used to directly guide leak detection and repair efforts. Additional data layers at local and regional scales provide context for point source emissions. These include methane flux inversions for the Los Angeles basin derived from surface observations and tracer transport modeling (3 km, 4-day resolution) as well as the CMS US gridded methane inventory (10 km, monthly resolution) over the state of California.
How to cite: Thorpe, A., Duren, R., Tapella, R., Bue, B., Foster, K., Yadav, V., Rafiq, T., Hopkins, F., Gill, K., Rodriguez, J., Plave, A., Cusworth, D., and Miller, C.: Methane Source Finder: A web-based data portal for exploring methane data, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9923, https://doi.org/10.5194/egusphere-egu2020-9923, 2020.
EGU2020-11541 | Displays | ITS4.8/ESSI4.1
Development and investigation of a deep learning based method for TEC map completionMingwu Jin, Yang Pan, Shunrong Zhang, and Yue Deng
Because of the limited coverage of receiver stations, current measurements of Total Electron Content (TEC) by ground-based GNSS receivers are incomplete, with large data gaps. Producing complete TEC maps for space science research is time consuming and requires the collaboration of five International GNSS Service (IGS) Ionosphere Associate Analysis Centers (IAACs), which apply different data processing and gap-filling algorithms and consolidate their results into the final IGS completed TEC maps. In this work, we developed a Deep Convolutional Generative Adversarial Network with Poisson blending (DCGAN-PB) model to learn the IGS completion process and complete TEC maps automatically. Using 10-fold cross validation on 20 years of IGS TEC data, DCGAN-PB achieves an average root mean squared error (RMSE) of about 4 TEC units (TECu) for high solar activity years and around 2 TECu for low solar activity years, roughly a 50% reduction in RMSE of the recovered TEC values compared to two conventional single-image inpainting methods. The developed DCGAN-PB model can thus serve as an efficient tool for automatic completion of TEC maps.
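RMSE figures for map completion are typically scored only on the filled gaps, with the observed cells masked out of the error. The helper below is a stdlib sketch of that evaluation protocol; the exact details of the authors' evaluation are an assumption, since the abstract does not spell them out.

```python
import math

def gap_rmse(truth, completed, observed):
    """RMSE (in TECu) over gap cells only.

    observed[i] is True where the TEC map had a real measurement
    (excluded from scoring) and False where the model filled a gap.
    """
    errs = [(t - c) ** 2
            for t, c, obs in zip(truth, completed, observed) if not obs]
    return math.sqrt(sum(errs) / len(errs))

# Two gap cells with reconstruction errors of 2 and 3 TECu:
rmse = gap_rmse([10.0, 12.0, 8.0], [10.0, 10.0, 11.0],
                [True, False, False])
```

In a cross-validation setting like the one described, held-out complete IGS maps provide the ground truth, and artificial gaps are introduced for the model to fill before scoring.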
How to cite: Jin, M., Pan, Y., Zhang, S., and Deng, Y.: Development and investigation of a deep learning based method for TEC map completion, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11541, https://doi.org/10.5194/egusphere-egu2020-11541, 2020.
EGU2020-13893 | Displays | ITS4.8/ESSI4.1
Automatic extraction of modern reefs satellite images geometries using Computer VisionGrisel Jimenez Soto, Mirza Arshad Beg, Michael C. Poppelreiter, and Khaidi Rahmatsyah
Three globally distributed modern reef sites were selected to explore automatic metric extraction for further study of the relationships between reef morphology and the surrounding oceanographic conditions; the extracted geometries can potentially be used as training images for Multiple Point Statistics simulations.
Obtaining geometric features from satellite images is laborious when done manually. Automatic geometric feature detection, however, is a challenging problem due to the varying lighting, orientation, and background of the target object, especially when analyzing raw images in RGB format. In this work, a robust algorithm programmed in Python is presented to automatically estimate the geometric properties of a set of coral reef islands located in South East Asia. First, the code loads satellite imagery in RGB format from a specified folder; each raw coral reef island image is then resized, converted from RGB to grayscale, smoothed, and binarized using the Open Computer Vision (OpenCV) library available in Python 3. The island edge carries very prominent geometric attributes that characterize its shape, so morphological transformations were applied to define the contour of each island. Structural analysis and shape descriptors were then computed on the image set to numerically characterize the islands. In total, 27 satellite images were processed successfully by the algorithm; only two images were not segmented correctly, because their illumination and the intensity of the predominant colours, especially blue, differed from the rest of the images. The resulting dataset was exported from Python to Microsoft Excel spreadsheets and CSV format.
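The grayscale-conversion and binarization steps of this pipeline can be sketched without OpenCV; in practice they correspond to cv2.cvtColor, cv2.threshold, and cv2.findContours. The pure-Python stand-in below is illustrative only: the fixed threshold and the luma weights are standard choices, not values from this work.

```python
def to_gray(rgb):
    """ITU-R BT.601 luma conversion for one (R, G, B) pixel."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def binarize(gray_image, threshold=128):
    """Fixed-threshold binarization: 1 = island (foreground), 0 = sea."""
    return [[1 if px >= threshold else 0 for px in row]
            for row in gray_image]

def island_area(binary_image):
    """Simplest shape descriptor: foreground area in pixels."""
    return sum(sum(row) for row in binary_image)

gray = [[200.0, 50.0], [130.0, 10.0]]
mask = binarize(gray)     # [[1, 0], [1, 0]]
area = island_area(mask)  # 2 pixels of island
```

A fixed threshold is the weak link, which is consistent with the two failures reported above: images dominated by unusual illumination or colour balance push island pixels across the threshold, and adaptive methods (e.g. Otsu's) are the usual remedy.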
How to cite: Jimenez Soto, G., Arshad Beg, M., Poppelreiter, M. C., and Rahmatsyah, K.: Automatic extraction of modern reefs satellite images geometries using Computer Vision, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13893, https://doi.org/10.5194/egusphere-egu2020-13893, 2020.
EGU2020-17707 | Displays | ITS4.8/ESSI4.1
The ADAM platformSimone Mantovani, Stefano Natali, Marco Folegani, Mario Cavicchi, Damiano Barboni, and Sergio Ferraresi
Operational Earth Observation (EO) satellite missions are entering their fifth decade, and the need to access historical data has strongly increased, particularly for long-term science and environmental monitoring applications. This trend, which drives users to request long time series of data, will grow even further in the future, particularly given the interest in assessing and monitoring global change to support policy makers' decisions on safeguarding the atmosphere, ocean, cryosphere, carbon and other biogeochemical cycles.
The Copernicus initiative (https://www.copernicus.eu) is playing a unique and unprecedented role in terms of the amount, relevance, and quality of the environmental data it provides. In the frame of activities funded by the European Commission, the Data and Information Access Services (DIAS) are operated by five different consortia to acquire, process, archive, and distribute data from Copernicus and Third-Party Missions.
With this enormous availability of past, present, and future geospatial environmental data, users need to be able to identify the datasets that best fit their needs and to obtain these data in the fastest and easiest way possible. The Advanced geospatial DAta Management (ADAM) platform (https://adamplatform.eu/) provides discovery, access, processing, and visualization services for data in a distributed cloud environment, significantly reducing the burden of data usability.
ADAM allows the exploitation of EO data archives extending from a few years to decades and therefore makes their continuously increasing scientific value fully accessible. Advances in satellite sensor characteristics (spatial resolution, temporal frequency, spectral bands) as well as in all related technical aspects (data and metadata formats, storage, infrastructures) underline the strong need to preserve EO space data without time constraints and to keep them accessible and exploitable, as they constitute an asset for humankind. This is a typical big data challenge that ADAM can address.
This paper describes the ADAM platform and various application domains supported with its data science analytics and visualization capabilities.
How to cite: Mantovani, S., Natali, S., Folegani, M., Cavicchi, M., Barboni, D., and Ferraresi, S.: The ADAM platform, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17707, https://doi.org/10.5194/egusphere-egu2020-17707, 2020.
EGU2020-22386 | Displays | ITS4.8/ESSI4.1
Meteorology Open Application Platform (MOAP3.0) -- the Application and Implementation of Brand-new Intelligent Analysis Meteorological PlatformWenbin Song and Hu Zhengguang
The Intellectualized Analysis Meteorological Platform is implemented on the Meteorology Open Application Platform (MOAP3.0), developed by the National Meteorological Centre of the China Meteorological Administration. This web-based visualization and analysis platform integrates statistical analysis, intelligent interaction, and rendering, and adopts a decoupled development mode. Its web server is deployed on a distributed cloud framework and supports real-time, interactive, and visual analysis of massive meteorological data, including national meteorological observations, national guidance forecast data (0.05° x 0.05°), area forecasts, and MICAPS data. The platform has been operational since December 2019. Continuous testing indicates that the system is stable and reliable, with second-level response times for data transmission. The system follows a "one-key linkage" design with step-by-step drill-down analysis in a temporal and spatial context; it provides three main home pages ("disaster analysis", "meteorological big data for living analysis", and "station climate background analysis"), 36 standard interfaces, and 21 independent functional modules. In the spatial dimension, meteorological data cascade across six levels: observation data, grid data, cities, river basins, regional meteorological centres, and the national meteorological centre. In the time dimension, linked analysis of minute, hourly, daily, ten-day, monthly, and annual values completes the full time chain. Integrated analysis of history, current conditions, and forecasts across China is thus realized on the basis of station climate backgrounds and a relatively well-developed spatio-temporal meteorological data mining system.
To be specific, the basic meteorological algorithms in the system's backend include regional averages, counts of precipitation days of different magnitudes, historical extremes of single meteorological elements, spatial interpolation, and fall-area analysis. The web visualization functions include online rendering of weather maps, integrated spatio-temporal display of multiple meteorological elements, and colour-scale classification and filtering. To address the problem of densely clustered stations in daily operation, station thinning and hierarchical rendering strategies are used for optimization. The whole system takes China's standardized tile-based electronic map as its carrier, displays and allows interaction with massive meteorological data of various dimensions and types, and produces a complete real-time sequence analysis product, which will play an important role in the practical application of forecasting and early warning in the Chinese meteorological field.
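Two of the backend statistics named above, regional averaging and counting precipitation days by magnitude, can be sketched as follows. The magnitude thresholds here are illustrative, not the platform's operational definitions, which the abstract does not give.

```python
# Illustrative precipitation magnitude classes in mm/day
# (assumed bins, not the operational CMA definitions).
CLASSES = {
    "light": (0.1, 10.0),
    "moderate": (10.0, 25.0),
    "heavy": (25.0, float("inf")),
}

def precipitation_days(daily_mm, classes=CLASSES):
    """Count the days whose total falls in each magnitude class."""
    counts = {name: 0 for name in classes}
    for p in daily_mm:
        for name, (lo, hi) in classes.items():
            if lo <= p < hi:
                counts[name] += 1
    return counts

def regional_average(values):
    """Regional average of a meteorological element over stations."""
    return sum(values) / len(values)

counts = precipitation_days([0.0, 5.0, 12.0, 30.0, 0.05])
# one light, one moderate, and one heavy day; the two driest
# days fall below the 0.1 mm detection threshold
```

In an operational system these aggregates would be computed server-side over the gridded or station data for the selected region and time window, then returned to the web client for rendering.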
How to cite: Song, W. and Zhengguang, H.: Meteorology Open Application Platform (MOAP3.0) -- the Application and Implementation of Brand-new Intelligent Analysis Meteorological Platform, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22386, https://doi.org/10.5194/egusphere-egu2020-22386, 2020.
EGU2020-9008 | Displays | ITS4.8/ESSI4.1
Geospatial technologies and tools for data collection and communication: A TaxonomyBritta Ricker
Currently, a myriad of geospatial technologies, geovisual techniques, and data sources are available on the market for both data collection and geovisualization: drones, LiDAR, multispectral satellite imagery, “big data”, 360-degree cameras, smartphones, smartwatches, and web-based mobile maps through to virtual and augmented reality. These technologies are becoming progressively easier to use thanks to improved computing power and accessible application programming interfaces. These advances, combined with dropping prices, mean that there are increasing opportunities to collect more data from heterogeneous populations as well as to communicate ideas to them. This offers seemingly limitless opportunities for anyone collecting and disseminating geospatial data. When data are aggregated and processed, they become information. Geovisualizations can be used to communicate this information effectively and efficiently; their aim is to interactively reveal spatial patterns that might otherwise go unnoticed. Much excitement surrounds these geospatial technologies, which offer increased opportunities to communicate geospatial phenomena in a stimulating manner through various geovisualization techniques and interfaces. The challenge is that it also takes very little effort to make geovisualizations that are visually attractive but communicate nothing. With so many accessible geospatial technologies available, a common and important question persists: which geospatial technologies and geovisualization techniques are best suited to collect and communicate geospatial data?
The answer to this question will vary based on the phenomena being examined, the geospatial data available and the communication goals. Here I present a taxonomy of geospatial technologies and geovisualization techniques, identifying their strengths and weaknesses for data collection and geospatial information communication. The aim of this taxonomy is to act as a decision support tool, to help researchers make informed decisions about what technologies to incorporate into a research project. With so many different technologies available, what should a researcher consider before they pick which platform to use to communicate important findings? More explicitly, how can specific geospatial technologies help transform scientific data into information and subsequent knowledge?
Included in this taxonomy are data collection tools and cartographic interface tools. This taxonomy is informed by literature from a cross-section of disciplines ranging from cartography, spatial media, communication, geographic scale, spatial cognition, human-computer interaction, and user experience research. These literatures are presented and woven together to synthesize the strengths and weaknesses of different geospatial technologies for data collection/entry and spatial information communication. Additionally, key considerations are presented in an effort to achieve effective communication; meaning identifying intended use with intended users, to best meet communication goals. To illustrate key points, indicator data from the United Nations Sustainable Development Goals are used. The aim here is to offer recommendations on how to best identify and apply appropriate technology for data collection and geovisualization, in an effort to reduce the number of frivolous, confusing, and ugly maps available online.
How to cite: Ricker, B.: Geospatial technologies and tools for data collection and communication: A Taxonomy, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9008, https://doi.org/10.5194/egusphere-egu2020-9008, 2020.
EGU2020-1130 | Displays | ITS4.8/ESSI4.1
A reproducible solar irradiance estimation process using Recurrent Neural Network
Amita Muralikrishna, Rafael Santos, and Luis Eduardo Vieira
The Sun has a constant influence on Earth, affecting life on our planet in different ways. The physical, chemical and biological processes that occur on Earth are directly influenced by the variation of solar irradiance, which is a function of the activity in the Sun’s different atmospheric layers and its rapid variation. Studying this relationship may require a large amount of collected data, without the significant gaps that many kinds of issues can cause. In this work, we present a Recurrent Neural Network as an option for estimating Total Solar Irradiance (TSI) and Spectral Solar Irradiance (SSI) variability. Solar images collected in different wavelength channels were preprocessed and used as input parameters, and TSI and SSI data collected by instruments on board SORCE were used as the reference for the results we expected to achieve. Complementary to this approach, we opted to develop a reproducible procedure, for which we chose a free programming language, in an attempt to offer the same kind of results, with the same accuracy, to future studies that wish to reproduce our procedure. To achieve this, reproducible notebooks will be generated with the intention of providing transparency in the data analysis process and allowing the process and the results to be validated, modified and optimized by others. This approach aims at good accuracy in estimating TSI and SSI, allowing their reconstruction across data gaps and also the forecasting of their values six hours ahead.
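As an illustrative sketch of the core estimation step (not code from this work; all names, dimensions and weights below are hypothetical stand-ins), a recurrent estimator updates a hidden state from image-derived features at each time step and produces the irradiance estimate through a linear read-out:

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_forward(x_seq, Wx, Wh, b, Wo, bo):
    """Run a simple (Elman-style) RNN over a sequence of feature vectors
    and return an irradiance estimate from the final hidden state."""
    h = np.zeros(Wh.shape[0])
    for x in x_seq:                       # one update per time step
        h = np.tanh(Wx @ x + Wh @ h + b)  # hidden-state recurrence
    return Wo @ h + bo                    # linear read-out (e.g., TSI)

# Toy dimensions: 8 image-derived features per step, 6 time steps, 16 hidden units
n_feat, n_hidden, n_steps = 8, 16, 6
Wx = rng.normal(scale=0.1, size=(n_hidden, n_feat))
Wh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
b = np.zeros(n_hidden)
Wo = rng.normal(scale=0.1, size=(1, n_hidden))
bo = np.zeros(1)

x_seq = rng.normal(size=(n_steps, n_feat))  # stand-in for preprocessed solar images
tsi_estimate = rnn_forward(x_seq, Wx, Wh, b, Wo, bo)
print(tsi_estimate.shape)  # → (1,)
```

In practice the weights would be fitted against the SORCE TSI/SSI reference series; the loop above is only the forward pass.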
How to cite: Muralikrishna, A., Santos, R., and Vieira, L. E.: A reproducible solar irradiance estimation process using Recurrent Neural Network, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1130, https://doi.org/10.5194/egusphere-egu2020-1130, 2020.
EGU2020-5109 | Displays | ITS4.8/ESSI4.1
Using supervised machine learning to automatically detect type II and III solar radio bursts
Eoin Carley
Solar flares are often associated with high-intensity radio emission known as ‘solar radio bursts’ (SRBs). SRBs are generally observed in dynamic spectra and have five major spectral classes, labelled type I to type V depending on their shape and extent in frequency and time. Due to their morphological complexity, a challenge in solar radio physics is the automatic detection and classification of such radio bursts. Classification of SRBs has become necessary in recent years due to the large data rates (3 Gb/s) generated by advanced radio telescopes such as the Low Frequency Array (LOFAR). Here we test the ability of several supervised machine learning algorithms to automatically classify type II and type III solar radio bursts. We test the detection accuracy of support vector machines (SVM), random forests (RF), and an implementation of transfer learning with the Inception and YOLO convolutional neural networks (CNNs). The training data were assembled from type II and III bursts observed by the Radio Solar Telescope Network (RSTN) from 1996 to 2018, supplemented by type II and III radio burst simulations. The CNNs were the best performers, often exceeding 90% accuracy on the validation set, with YOLO also able to perform radio burst localisation in dynamic spectra. This shows that machine learning algorithms (in particular CNNs) are capable of SRB classification, and we conclude by discussing future plans for the implementation of a CNN in the LOFAR for Space Weather (LOFAR4SW) data-stream pipelines.
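To make the classification framing concrete, the toy sketch below treats dynamic spectra as fixed-size labelled images; the synthetic “bursts” and the nearest-centroid classifier are hypothetical stand-ins for the RSTN spectra and the SVM/RF/CNN models tested in the study:

```python
import numpy as np

rng = np.random.default_rng(1)

def synth_burst(kind, shape=(32, 32)):
    """Crude stand-in for a dynamic spectrum (frequency x time):
    type III bursts appear nearly vertical (broadband, short-lived),
    while type II bursts drift across frequency with time."""
    img = rng.normal(scale=0.1, size=shape)
    t = np.arange(shape[1])
    if kind == "III":
        img[:, shape[1] // 2] += 1.0             # vertical stripe
    else:  # "II"
        img[t * shape[0] // shape[1], t] += 1.0  # diagonal drift
    return img.ravel()

# Labelled training set: 0 = type II, 1 = type III
X = np.stack([synth_burst(k) for k in ("II", "III") * 50])
y = np.array([0, 1] * 50)

# Nearest-centroid classifier: one mean "template" per class
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(x):
    return int(np.argmin(((centroids - x) ** 2).sum(axis=1)))

accuracy = np.mean([predict(x) == c for x, c in zip(X, y)])
print(accuracy)
```

Real classifiers replace the centroid templates with learned decision boundaries (SVM/RF) or learned convolutional features (CNNs), but the input/label framing is the same.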
How to cite: Carley, E.: Using supervised machine learning to automatically detect type II and III solar radio bursts, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5109, https://doi.org/10.5194/egusphere-egu2020-5109, 2020.
EGU2020-127 | Displays | ITS4.8/ESSI4.1 | Highlight
Innovative visualization and analysis capabilities to advance scientific discovery
Emily Law and Brian Day and the Solar System Treks Project Team
NASA’s Solar System Treks program produces a suite of interactive visualization and AI/data science analysis tools. These tools enable mission planners, planetary scientists, and engineers to access geospatial data products derived from big data returned from a wide range of instruments aboard a variety of past and current missions, for a growing number of planetary bodies.
The portals provide easy-to-use tools to browse, search, and overlay a growing range of value-added data products. Data products can be viewed in 2D, 3D and VR, and can easily be stacked and blended together for optimal visualization. Data sets can be plotted and compared against each other. Standard gaming and 3D mouse controllers allow users to maneuver first-person visualizations of flying across planetary surfaces.
The portals provide a set of advanced analysis tools that employ AI and data science methods. The tools facilitate measurement and study of terrain, including the distance, height, and depth of surface features. They allow users to perform analyses such as lighting and local hazard assessments including slope, surface roughness, crater/boulder distribution, rockfall distribution, and surface electrostatic potential. These tools facilitate a wide range of activities including the planning, design, development, test and operations associated with lunar sortie missions; robotic (and potentially crewed) operations on the surface; planning tasks in the areas of landing site evaluation and selection; design and placement of landers and other stationary assets; design of rovers and other mobile assets; developing terrain-relative navigation (TRN) capabilities; deorbit/impact site visualization; and assessment and planning of science traverses. Additional tools useful for scientific research, such as line-of-sight calculation, are under development.
Seven portals are publicly available to explore the Moon, Mars, Vesta, Ceres, Titan, IcyMoons, and Mercury with more portals in development and planning stages.
This presentation will provide an overview of the Solar System Treks and highlight its innovative visualization and analysis capabilities that advance scientific discovery. The information system and science communities are invited to provide suggestions and requests as the development team continues to expand the portals’ tool suite to maximize scientific research.
Lastly, the authors would like to thank the Planetary Science Division of NASA’s Science Mission Directorate, NASA’s SMD Science Engagement and Partnerships, the Advanced Explorations Systems Program of NASA’s Human Exploration Operations Directorate, and the Moons to Mars Mission Directorate for their support and guidance in the development of the Solar System Treks.
How to cite: Law, E. and Day, B. and the Solar System Treks Project Team: Innovative visualization and analysis capabilities to advance scientific discovery, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-127, https://doi.org/10.5194/egusphere-egu2020-127, 2020.
EGU2020-6077 | Displays | ITS4.8/ESSI4.1
Surveying the Machine Learning Landscape in Earth Sciences
Muthukumaran Ramasubramanian, Katrina Virts, Ashlyn Shirey, Ankur Kumar, Muhammad Hassan, Ashish Acharya, Rahul Ramachandran, and Maskey Manil
Several recent papers have investigated different challenges in applying machine learning (ML) techniques to Earth science problems. The challenges listed range from interpretability of the results to computational demand to data issues. In this paper, we focus on specific challenges listed in the review papers that are centered around training data, as the size of training data is important in applying deep learning (DL) techniques. We are in the process of conducting a literature survey to better understand these challenges as well as to understand any trends. As part of this survey, our review has encompassed Earth science papers from AGU, AMS, IEEE and SPIE journals covering the last ten years and focused on papers that utilize supervised ML techniques.
Our initial survey results show some interesting findings. The use of supervised machine learning techniques in Earth science research has increased significantly in the last decade. The number of atmospheric science papers (i.e., from AMS journals) using ML approaches has increased by over 40%. Across all of Earth science even larger changes have occurred, including a >90% increase in AGU papers and a >10-fold increase in IEEE papers using ML.
We also conducted a deep dive into all the papers from AGU journals and uncovered interesting findings. Supervised ML is prevalent in certain sub-disciplines within Earth science. The biogeoscience and land surface research communities lead in this area: over 20% of papers published in Global Biogeochemical Cycles, JGR Biogeosciences, JGR Earth Surface, and Water Resources Research use supervised ML techniques, including over 35% of the papers in JGR Biogeosciences. The availability of labeled training data in Earth science is reflected in the number of training samples used in supervised analysis. In the papers we surveyed, most ML algorithms were trained using small samples (i.e., hundreds of labeled examples). However, for some applications using model output or large, established datasets, the number of training samples was several orders of magnitude greater.
In this presentation, we will describe our findings from the literature survey. We will also list recommendations for the science community to address the existing challenges around training data.
How to cite: Ramasubramanian, M., Virts, K., Shirey, A., Kumar, A., Hassan, M., Acharya, A., Ramachandran, R., and Manil, M.: Surveying the Machine Learning Landscape in Earth Sciences, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6077, https://doi.org/10.5194/egusphere-egu2020-6077, 2020.
EGU2020-10435 | Displays | ITS4.8/ESSI4.1 | Highlight
Earth science phenomena portal: from deep learning-based event detection to visual exploration
Manil Maskey, Rahul Ramachandran, Iksha Gurung, Muthukumaran Ramasubramanian, Aaron Kaulfus, Georgios Priftis, Brian Freitag, Drew Bollinger, Ricardo Mestre, and Daniel da Silva
Earth science researchers typically use event (an instance of an Earth science phenomenon) data for case study analysis. However, Earth science data search systems are currently limited to specifying a query parameter that includes the space and time of an event. Such an approach results in researchers spending a considerable amount of time sorting through data to conduct research studies on events. With the growing data volumes, it is imperative to investigate data-driven approaches to address this limitation in the data search system.
We describe several contributions towards alternative ways to accelerate event-based studies from large data archives.
The first contribution is the use of a machine learning-based approach, an enabling data-driven technology, to detect Earth science events from image archives. Specifically, the development of deep learning models to detect various Earth science phenomena is discussed. Deep learning comprises machine learning algorithms that consist of multiple layers, where each layer performs feature detection. We leverage recent advancements in deep learning techniques, mostly convolutional neural networks (CNNs), which have produced state-of-the-art image classification results in many domains.
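The idea that each layer “performs feature detection” can be shown with a single hand-written convolution; this is a toy sketch (the image and kernel are invented for illustration), not the CNNs deployed in the portal:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2-D cross-correlation: one feature-detection step
    of a convolutional layer."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

# A horizontal-gradient kernel responds where brightness changes left to
# right, e.g. at the sharp edge between a dark and a bright region.
img = np.zeros((8, 8))
img[:, 4:] = 1.0                      # bright right half
edge_kernel = np.array([[-1.0, 1.0]])
response = np.maximum(conv2d(img, edge_kernel), 0.0)  # ReLU activation
print(response.max(), response.sum())  # → 1.0 8.0
```

A trained CNN stacks many such kernels, learned from data rather than hand-written, so successive layers detect progressively more complex event signatures.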
The second contribution is the development of an event database and a phenomena portal. The phenomena portal utilizes the deep learning detected events cataloged in an event database. The portal provides a user interface with several features including display of events of the day, spatio-temporal characteristics of events, and integration of user feedback.
The third contribution is the development of a cloud-native framework to automate and scale the deep learning models in a production environment.
The paper also discusses the challenges in developing an end-to-end Earth science machine learning project and possible approaches to address those challenges.
How to cite: Maskey, M., Ramachandran, R., Gurung, I., Ramasubramanian, M., Kaulfus, A., Priftis, G., Freitag, B., Bollinger, D., Mestre, R., and da Silva, D.: Earth science phenomena portal: from deep learning-based event detection to visual exploration, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10435, https://doi.org/10.5194/egusphere-egu2020-10435, 2020.
EGU2020-11222 | Displays | ITS4.8/ESSI4.1 | Highlight
Using Earth Observations to engender a social-ecological systems perspective on rural livelihoods and wellbeing
Peter Hargreaves and Gary Watmough
An estimated 70% of the world’s poorest people live in rural spaces. There is a consistent differentiation between rural and urban contexts, where the former are typically characterised by weak infrastructure, limited services and social marginalisation. At the same time, the world’s poorest people are most vulnerable to global change impacts. Historic pathways to measuring and achieving poverty reduction must be adapted for an era of increasingly dynamic change, where spatio-temporal blind spots preclude a comprehensive understanding of poverty and its manifestation in rural developing contexts. To catalyse an effective poverty eradication narrative, we require a characterisation of the spatio-temporal anatomy of poverty metrics. To achieve this, researchers and practitioners must develop tools and mobilise data sources that enable the detection and visualisation of economic and social dimensions of rural spaces at finer temporal and spatial scales than is currently practised. This can only be realised by integrating new technologies and non-traditional sources of data alongside conventional data to engender a novel policy landscape.
Cue Earth Observation: the only medium through which data can be gathered that is global in its coverage but also available across multiple temporal and spatial scales. Earth Observation (EO) data (collected from satellite, airborne and in-situ remote sensors) have a demonstrable capacity to inform, update, situate and provide the necessary context to design evidence-based policy for sustainable development. This is particularly important for the Sustainable Development Goals (SDGs) because the nested indicators are based on data that can be visualised, and many have a definitive geospatial component, which can improve national statistics reporting.
In this review, we present a rubric for integrating EO and geospatial data into rural poverty analysis. This aims to provide a foundation from which researchers at the interface of social-ecological systems can unlock new capabilities for measuring economic, environmental and social conditions at the requisite scales and frequency for poverty reporting and also for broader livelihoods and development research. We review satellite applications and explore the development of EO methodologies for investigating social-ecological conditions as indirect proxies of rural wellbeing. This is nested within the broader sustainable development agenda (in particular the SDGs) and aims to set out what our capabilities are and where research should be focused in the near-term. In short, elucidating to a broad audience what the integration of EO can achieve and how developing social-ecological metrics from EO data can improve evidence-based policymaking.
Key words: Earth Observation; Poverty; Livelihoods; Sustainable Development Goals; Remote Sensing
How to cite: Hargreaves, P. and Watmough, G.: Using Earth Observations to engender a social-ecological systems perspective on rural livelihoods and wellbeing., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11222, https://doi.org/10.5194/egusphere-egu2020-11222, 2020.
EGU2020-17705 | Displays | ITS4.8/ESSI4.1
Planetary Science Virtual Observatory: VESPA/Europlanet outcome and prospects
Stéphane Erard, Baptiste Cecconi, Pierre Le Sidaner, Angelo Pio Rossi, Hanna Rothkaehl, and Teresa Capria
The Europlanet-2020 programme, which ended in August 2019, included an activity called VESPA (Virtual European Solar and Planetary Access), which focused on adapting Virtual Observatory (VO) techniques to handle Planetary Science data. We will present some aspects of VESPA at the end of this four-year development phase and at the onset of the newly selected Europlanet-2024 programme, starting in February 2020. VESPA currently distributes 54 data services which are searchable according to observing conditions and encompass a wide scope, including surfaces, atmospheres, magnetospheres and planetary plasmas, small bodies, heliophysics, exoplanets, and lab spectroscopy. Versatile online visualization tools have been adapted for Planetary Science, and efforts were made to connect the Astronomy VO with related environments, e.g., GIS for planetary surfaces. The new programme will broaden and secure the former “data stewardship” concept, providing a handy solution to Open Science challenges in our community. It will also move towards a new concept of “enabling data analysis”: a run-on-demand platform will be adapted from another H2020 programme in Astronomy (ESCAPE); VESPA services will be made ready to use for Machine Learning and geological mapping activities, and will also host selected results from such analyses. More tutorials and practical use cases will be made available to facilitate access to the VESPA infrastructure.
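VESPA services are queried through the Virtual Observatory's TAP protocol using the EPN-TAP convention, in which every service exposes an epn_core table with standardized columns such as target_name, time_min and time_max. As a minimal sketch, the snippet below builds such a query string; the schema name and time window are hypothetical placeholders, not a real VESPA service.

```python
# Sketch: composing an ADQL query for an EPN-TAP service as served
# through the VESPA portal. The schema name "titan" and the Julian-day
# window are illustrative assumptions; real service schemas should be
# looked up via the portal (http://vespa.obspm.fr).

def build_epntap_query(schema, target, tmin_jd, tmax_jd):
    """Return an ADQL query selecting granules for one target
    observed inside a time window (Julian days)."""
    return (
        f"SELECT * FROM {schema}.epn_core "
        f"WHERE target_name = '{target}' "
        f"AND time_min >= {tmin_jd} AND time_max <= {tmax_jd}"
    )

query = build_epntap_query("titan", "Titan", 2457000.5, 2457100.5)
```

Such a query string can then be submitted to the service's TAP endpoint with any TAP client.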
VESPA portal: http://vespa.obspm.fr
The Europlanet 2020/2024 Research Infrastructure projects have received funding from the European Union's Horizon 2020 research and innovation programme under grant agreements No 654208 and No 871149
How to cite: Erard, S., Cecconi, B., Le Sidaner, P., Rossi, A. P., Rothkaehl, H., and Capria, T.: Planetary Science Virtual Observatory: VESPA/Europlanet outcome and prospects, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17705, https://doi.org/10.5194/egusphere-egu2020-17705, 2020.
EGU2020-18334 | Displays | ITS4.8/ESSI4.1
Development of GOCI-II Toolbox for SNAP
Jae-Moo Heo, Hee-Jeong Han, Hyun Yang, Sunghee Kwak, and Taekyung Lee
GOCI-II (Geostationary Ocean Color Imager II), the successor of GOCI, will be launched in February 2020, and its ground system has been under development since 2015. New tools are therefore needed for the scientific analysis and exploitation of GOCI-II data. GDPS (GOCI Data Processing System), the data analysis tool for the existing GOCI, has some limitations: it only works on Windows, and considerable effort is required to develop and improve its analysis and processing functions. To address these issues, we are developing a GOCI-II Toolbox (GTBX) based on SNAP (SeNtinel Application Platform), a widely used software platform that evolved from the ESA BEAM/NEST architecture and inherits all current NEST functionality. GOCI Level-1B and Level-2 files use binary and HDF-EOS5 formats, respectively, whereas both GOCI-II Level-1B and Level-2 files use NetCDF. The GTBX provides visualization and analysis of GOCI/GOCI-II data, as well as a GOCI-II Level-2 processor for ocean color products, including atmospheric correction and application products for ocean, atmosphere and land. Furthermore, the GTBX extends the SNAP product library to display the Thematic Realtime Environmental Distributed Data Services (THREDDS) catalogs of GOCI/GOCI-II data and provides remote access to partial data using the Open-source Project for a Network Data Access Protocol (OPeNDAP).
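The remote partial-data access mentioned above works because OPeNDAP lets a client append a constraint expression to the dataset URL, so only the requested array slice is transferred. A minimal sketch, assuming a hypothetical THREDDS path and variable name (not an actual GTBX endpoint):

```python
# Sketch: building an OPeNDAP URL that fetches only a subset of one
# variable from a (hypothetical) GOCI-II NetCDF granule. The base URL
# and variable name are illustrative placeholders.

def opendap_subset_url(base_url, var, lat_slice, lon_slice):
    """Append a DAP constraint expression selecting var[lat, lon]."""
    lat0, lat1 = lat_slice
    lon0, lon1 = lon_slice
    return f"{base_url}?{var}[{lat0}:{lat1}][{lon0}:{lon1}]"

url = opendap_subset_url(
    "https://example.org/thredds/dodsC/goci2/L2_chl.nc",
    "chlorophyll", (100, 200), (300, 400))
```

A NetCDF-aware client opening this URL would download only the 101x101 tile rather than the full granule.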
The GOCI-II Level-2 processor algorithms are implemented in Python and C/C++, and each algorithm application is distributed as a Docker image, so it can run in any environment that supports Docker (e.g., Windows, Linux and macOS). In addition, we introduce parallel processing methods suited to each application. In computing environments that support the Open Multi-Processing (OpenMP), Open Computing Language (OpenCL) and Compute Unified Device Architecture (CUDA) libraries, users of GOCI-II data can take advantage of the computing resources of multi-core CPUs and GPUs, making it possible to process large-scale data at very high speed.
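The parallelism described above is possible because Level-2 processing is largely independent per image tile. As a stand-in for the actual OpenMP/OpenCL/CUDA kernels (whose interfaces are not described here), a minimal Python sketch of the same embarrassingly parallel structure:

```python
# Sketch: tile-parallel processing of a granule. process_tile is a
# placeholder for a real step such as atmospheric correction; the
# real GTBX applications use OpenMP/OpenCL/CUDA inside Docker images.
from concurrent.futures import ThreadPoolExecutor

def process_tile(tile):
    # placeholder transform standing in for a per-tile algorithm
    return [v * 0.9 for v in tile]

def process_granule(tiles, workers=4):
    """Apply process_tile to every tile concurrently, keeping order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_tile, tiles))
```

For CPU-bound numeric kernels the real speedup comes from releasing the GIL in C/C++ extensions or from process- and GPU-level parallelism, which this thread-based sketch only gestures at.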
The GTBX works seamlessly with the generic functions of SNAP. By combining SNAP's various visualization and analysis functions with easy access to, and powerful processing of, GOCI/GOCI-II data, we expect the GTBX to enable rich utilization of GOCI-II data.
How to cite: Heo, J.-M., Han, H.-J., Yang, H., Kwak, S., and Lee, T.: Development of GOCI-II Toolbox for SNAP, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18334, https://doi.org/10.5194/egusphere-egu2020-18334, 2020.
EGU2020-22202 | Displays | ITS4.8/ESSI4.1
Inferences of the lunar interior from a probabilistic gravity inversion approach
Kristel Izquierdo, Laurent Montesi, and Vedran Lekic
The shape and location of density anomalies inside the Moon provide insights into processes that produced them and their subsequent evolution. Gravity measurements provide the most complete data set to infer these anomalies on the Moon [1]. However, gravity inversions suffer from inherent non-uniqueness. To circumvent this issue, it is often assumed that the Bouguer gravity anomalies are produced by the relief of the crust-mantle or other internal interface [2]. This approach limits the recovery of 3D density anomalies or any anomaly at different depths. In this work, we develop an algorithm that provides a set of likely three-dimensional models consistent with the observed gravity data with no need to constrain the depth of anomalies a priori.
The volume of a sphere is divided into 6480 tesseroids and n Voronoi regions. The algorithm first assigns a density value to each Voronoi region, which can encompass one or more tesseroids. At each iteration, it can add or delete a region, or change its location [2, 3]. The optimal density of each region is then obtained by linear inversion of the gravity field, and the likelihood of the solution is calculated using Bayes’ theorem. After convergence, the algorithm outputs an ensemble of models with good fit to the observed data and high posterior probability. The ensemble might contain essentially similar interior density distribution models or many different ones, providing a view of the non-uniqueness of the inversion results.
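The two fixed-dimension steps inside each iteration (linear inversion for region densities, then likelihood evaluation) can be sketched as follows. This is a minimal illustration, not the authors' implementation: G is an assumed design matrix whose columns hold the gravity signature of each Voronoi region at the observation points, and the noise is assumed Gaussian; the trans-dimensional add/delete/move proposals are omitted.

```python
# Sketch: least-squares densities for a fixed Voronoi partition and
# a Gaussian log-likelihood of the resulting fit. G (n_obs x n_regions)
# and sigma (noise std) are assumptions for illustration.
import numpy as np

def invert_and_score(G, d_obs, sigma):
    """Return optimal region densities and the Gaussian log-likelihood
    of the residuals (up to an additive constant)."""
    rho, *_ = np.linalg.lstsq(G, d_obs, rcond=None)
    resid = d_obs - G @ rho
    loglike = -0.5 * np.sum((resid / sigma) ** 2)
    return rho, loglike
```

In the full algorithm this score feeds a Bayesian acceptance rule, so models that fit the GRAIL data well with few regions are preferentially retained in the ensemble.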
We use the lunar radial gravity acceleration obtained by the GRAIL mission [4], up to spherical harmonic degree 400, as input data to the algorithm. The gravity acceleration data of the resulting models match the input gravity very well, only missing the gravity signature of smaller craters. One group of models shows a deep positive density anomaly in the general area of the Clavius basin. The anomaly is centered at approximately 50°S and 10°E, at about 800 km depth. Density anomalies in this group of models remain relatively small and could be explained by mineralogical differences in the mantle. Major variations in crustal structure, such as the near side / far side dichotomy and the South Pole-Aitken basin, are also apparent, giving geological credence to these models. A different group of models points towards two high-density regions with much higher mass than the anomaly described by the first group; these may be regarded as unrealistic. Our method embraces the non-uniqueness of gravity inversions and does not impose a single view of the interior, although geological knowledge and geodynamic analyses are of course important for evaluating the realism of each solution.
References: [1] Wieczorek, M. A. (2006), Treatise on Geophysics, 153-193, doi: 10.1016/B978-0-444-53802-4.00169-X. [2] Izquierdo, K. et al. (2019), Geophys. J. Int. 220, 1687-1699, doi: 10.1093/gji/ggz544. [3] Izquierdo, K. et al. (2019), LPSC 50, abstr. 2157. [4] Lemoine, F. G., et al. (2013), J. Geophys. Res. 118, 1676–1698, doi: 10.1002/jgre.20118.
How to cite: Izquierdo, K., Montesi, L., and Lekic, V.: Inferences of the lunar interior from a probabilistic gravity inversion approach, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22202, https://doi.org/10.5194/egusphere-egu2020-22202, 2020.
EGU2020-18883 | Displays | ITS4.8/ESSI4.1
Automatic detection of the electron density from the WHISPER instrument onboard CLUSTER
Nicolas Gilet, Emmanuel De Leon, Klet Jegou, Luca Bucciantini, Xavier Vallières, Jean-Louis Rauch, and Pierrette Décréau
The Waves of HIgh frequency and Sounder for Probing Electron density by Relaxation (WHISPER) instrument is part of the Wave Experiment Consortium (WEC) of the CLUSTER mission. The instrument basically consists of a receiver, a transmitter, and a wave spectrum analyzer. It delivers active (sounding) and natural electric field spectra. The characteristic signature of waves indicates the nature of the ambient plasma regime and, combined with the spacecraft position, reveals the different magnetospheric boundaries and regions. The electron density can be deduced from the characteristics of natural waves in natural mode and from the resonances triggered in the sounding mode. The electron density is a parameter of major scientific interest and is also commonly used for the calibration of the particle instruments.
Until recently, determining the electron density required manual intervention, consisting of visually inspecting input parameters from the experiments, such as the WHISPER active/passive spectrograms combined with datasets from the other instruments onboard CLUSTER.
Work is underway to automate the detection of the electron density using Machine Learning and Deep Learning methods.
To automate this process, knowledge of the region (plasma regime) is highly desirable. To determine the different plasma regions, a Multi-Layer Perceptron has been implemented. This model consists of three dense neural-network layers, with additional dropout to prevent overfitting. For each detected region, a second Multi-Layer Perceptron was implemented to determine the plasma frequency. This model was trained on 100k spectra using manually identified plasma frequency values. The accuracy reaches up to 98% in some plasma regions.
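The described architecture (stacked dense layers with dropout, ending in a softmax over plasma regions) can be sketched in a few lines of NumPy. All layer sizes and the dropout rate below are illustrative assumptions; the actual WHISPER classifier's hyperparameters are not given in the abstract.

```python
# Toy forward pass of a dense MLP with inverted dropout between
# hidden layers and a softmax output over region classes.
import numpy as np

rng = np.random.default_rng(0)

def dense_relu(x, w, b):
    return np.maximum(x @ w + b, 0.0)            # dense layer + ReLU

def dropout(x, rate, training):
    if not training:
        return x                                  # identity at inference
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)                # inverted dropout

def mlp_forward(x, params, rate=0.2, training=True):
    """params: list of (weights, bias); last pair is the output layer."""
    for w, b in params[:-1]:
        x = dropout(dense_relu(x, w, b), rate, training)
    w, b = params[-1]
    logits = x @ w + b
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)       # softmax probabilities
```

In practice such a model would be trained with a framework like Keras or PyTorch; this sketch only shows how the layer stack maps a spectrum to region probabilities.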
These models for automated electron-density determination are also currently being applied to the dataset of the mutual impedance instrument (RPC-MIP) onboard ROSETTA, and will be useful for other space missions such as BepiColombo (especially the PWI/AM2P experiment) or JUICE (the RPWI/MIME experiment).
How to cite: Gilet, N., De Leon, E., Jegou, K., Bucciantini, L., Vallières, X., Rauch, J.-L., and Décréau, P.: Automatic detection of the electron density from the WHISPER instrument onboard CLUSTER, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18883, https://doi.org/10.5194/egusphere-egu2020-18883, 2020.
EGU2020-7267 * | Displays | ITS4.8/ESSI4.1 | Highlight
Earth System Music: music generated from the first United Kingdom Earth System model
Lee de Mora, Alistair Sellar, Andrew Yool, Julien Palmieri, Robin S. Smith, Till Kuhlbrodt, Robert J. Parker, Jeremy Walton, Jeremy C. Blackford, and Colin G. Jones
With the ever-growing interest from the general public in understanding climate science, it is becoming increasingly important that we present this information in ways accessible to non-experts. In this pilot study, we use time series data from the first United Kingdom Earth System model (UKESM1) to create six procedurally generated musical pieces, and use them to explain the process of modelling the earth system and to engage with the wider community.
Scientific data is almost always represented graphically either in figures or in videos. By adding audio to the visualisation of model data, the combination of music and imagery provides additional contextual clues to aid in the interpretation. Furthermore, the audiolisation of model data can be employed to generate interesting and captivating music, which can not only reach a wider audience, but also hold the attention of the listeners for extended periods of time.
Each of the six pieces presented in this work was themed around either a scientific principle or a practical aspect of earth system modelling. These pieces demonstrate the concepts of a spin up, a pre-industrial control run, multiple historical experiments, and the use of several future climate scenarios to a wider audience. They also show the ocean acidification over the historical period, the changes in circulation, the natural variability of the pre-industrial simulations, and the expected rise in sea surface temperature over the 20th century.
Each of these pieces was arranged using a different musical progression, style and tempo. All six pieces were performed by the digital piano synthesizer TiMidity++ and were published on the lead author's YouTube channel. The videos all show the progression of the data in time with the music, and a brief description of the methodology is posted alongside each video.
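Since the pieces are rendered from MIDI (TiMidity++ is a MIDI synthesizer), the core of such procedural generation is a mapping from model values to note numbers. A minimal sketch of one plausible mapping, rescaling a time series onto a pentatonic scale; the scale choice and pitch range are assumptions, not the authors' actual method:

```python
# Sketch: map a model time series onto MIDI note numbers drawn from
# a pentatonic scale, so any two simultaneous series stay consonant.
# scale/base_note/octaves are illustrative choices.

def series_to_midi(values, scale=(0, 2, 4, 7, 9), base_note=60, octaves=2):
    """Rescale values linearly onto scale degrees above base_note."""
    lo, hi = min(values), max(values)
    degrees = [s + 12 * o for o in range(octaves) for s in scale]
    notes = []
    for v in values:
        t = 0.0 if hi == lo else (v - lo) / (hi - lo)
        idx = min(int(t * len(degrees)), len(degrees) - 1)
        notes.append(base_note + degrees[idx])
    return notes
```

The resulting note list could then be written out with any MIDI library and rendered to audio alongside the data animation.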
To disseminate these works, links to each piece were published on the lead author's personal and professional social media accounts. The reach of these works was also analysed using YouTube's channel monitoring toolkit for content creators, YouTube studio.
How to cite: de Mora, L., Sellar, A., Yool, A., Palmieri, J., Smith, R. S., Kuhlbrodt, T., Parker, R. J., Walton, J., Blackford, J. C., and Jones, C. G.: Earth System Music: music generated from the first United Kingdom Earth System model, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7267, https://doi.org/10.5194/egusphere-egu2020-7267, 2020.
EGU2020-10267 | Displays | ITS4.8/ESSI4.1
Reducing Time to Results with EUMETSAT's New Data Services
Miruna Stoicescu, Guillaume Aubert, Fabrizio Borgia, Oriol Espanyol, Mark Higgins, Michel Horny, Daniel Lee, Peter Miu, Ilaria Parodi, Anthony Patchett, Klaus-Peter Renner, Joaquin Rodriguez Guerra, Rodrigo Romero, Harald Rothfuss, Joachim Saalmueller, Michael Schick, Sally Wannop, and Lothar Wolf
EUMETSAT offers a vast and growing collection of Earth observation data produced by over 35 years of operational meteorological satellites. New data products are produced 24/7x365, and consistency with previous satellites and other missions is ensured by intercalibration and reprocessing campaigns. The benefits for the geosciences community are readily apparent: a recent survey showed that EUMETSAT and its Satellite Application Facilities produce 26% of the Essential Climate Variable records identified by the Global Climate Observing System that can be observed from space.
With the advent of new core satellite programmes and many narrowly focused missions, the volume and complexity of the generated data products will increase significantly, making it unfeasible for traditional workflows, relying on accessing data holdings present on the user's premises, to fully exploit these observations.
Users can access EUMETSAT data via two service categories: “push” services, currently provided by EUMETCast Satellite and delivering data to users via satellite systems in near real-time, and “pull” services, currently provided by the Long Term Archive and by the EUMETSAT Visualisation Service (EUMETView). EUMETSAT is in the process of reshaping its data services portfolio by leveraging big data and cloud computing technologies. The new Data Services are being phased into operations during 2020 and address several challenges with using EUMETSAT's data: near real-time data access, accessing time series, viewing data, transforming it to make it compatible with downstream workflows, and processing data on the premises where they are stored.
EUMETSAT has established an on-premises hybrid cloud, in which new Data Services for online data access (Data Store), web map visualisations (View Service) and product format customisations (Data Tailor) are hosted. Additionally, our “push” services are extended, with the introduction of the EUMETCAST Terrestrial service.
The Data Store provides online access for directly downloading satellite data via a web-based user interface and APIs usable in processing chains. Users can download the data in its original format or customise it before download by invoking the Data Tailor Service. The View Service provides access via standard OGC Web Map, Web Coverage and Web Feature Services (WMS, WCS, WFS) which visualise data available in the Data Store. It is accessible via a web-based interface and APIs allowing the integration of visualisations in end-user applications. EUMETCast Terrestrial is an evolution of the EUMETCast Satellite system that relies on the network infrastructure provided by GEANT and its partners plus the Internet to deliver high volumes of data worldwide. EUMETCast Terrestrial is able to deliver data outside the EUMETCast Satellite footprint and to user communities large enough to benefit from a multicast service, but not large enough to justify a full satellite-based broadcast.
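Because the View Service exposes standard OGC interfaces, visualisations can be requested with nothing more than a well-formed WMS URL. As a minimal sketch, assuming a placeholder endpoint and layer name (real values come from the service's GetCapabilities document):

```python
# Sketch: composing an OGC WMS 1.3.0 GetMap request URL. The endpoint
# and layer name below are hypothetical, not actual View Service values.
from urllib.parse import urlencode

def wms_getmap_url(endpoint, layer, bbox, size=(800, 400)):
    """bbox is (min, min, max, max) in the order required by the CRS."""
    params = {
        "service": "WMS", "version": "1.3.0", "request": "GetMap",
        "layers": layer, "crs": "EPSG:4326",
        "bbox": ",".join(str(c) for c in bbox),
        "width": size[0], "height": size[1],
        "format": "image/png",
    }
    return f"{endpoint}?{urlencode(params)}"

url = wms_getmap_url("https://example.org/wms", "msg_ir108",
                     (-30, -60, 30, 60))
```

Fetching such a URL returns a rendered PNG, which makes it straightforward to embed View Service imagery in web pages or GIS clients.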
This presentation will showcase these new Data Services, which enable users to transition from traditional local data processing regimes to cloud-native research workflows. With the new Data Services, users can easily discover, explore, and tailor data products to their needs and thus shift the effort from data and infrastructure handling to domain-specific and scientific topics.
How to cite: Stoicescu, M., Aubert, G., Borgia, F., Espanyol, O., Higgins, M., Horny, M., Lee, D., Miu, P., Parodi, I., Patchett, A., Renner, K.-P., Rodriguez Guerra, J., Romero, R., Rothfuss, H., Saalmueller, J., Schick, M., Wannop, S., and Wolf, L.: Reducing Time to Results with EUMETSAT's New Data Services, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10267, https://doi.org/10.5194/egusphere-egu2020-10267, 2020.
EGU2020-10988 | Displays | ITS4.8/ESSI4.1
Visualization Techniques at the Aviation Weather Testbed
Austin Cross
The Aviation Weather Center (AWC) is part of the US National Weather Service, providing domestic and global aviation weather forecasts and warnings. Forecasters and automated systems disseminate up-to-date information, including through our website, aviationweather.gov. Recent years have seen AWC transition from primarily text-based products to increasingly interactive and graphical tools. While nearly all information is presented in two dimensions, recent work has focused on adding three- and four-dimensional visualizations to increase understanding in the user community. Web-based technologies allow interrogating large datasets on the fly without specific software installed on the client device, while providing a more complete picture and supporting the development of conceptual models beyond fixed horizontal slices of airspace.
How to cite: Cross, A.: Visualization Techniques at the Aviation Weather Testbed, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10988, https://doi.org/10.5194/egusphere-egu2020-10988, 2020.
EGU2020-12692 | Displays | ITS4.8/ESSI4.1
The StraboSpot data system for geological research
Julie Newman, J. Douglas Walker, Basil Tikoff, and Randolph Williams
The StraboSpot digital data system is designed to allow researchers to digitally collect, store, and share both field and laboratory data. Originally designed for structural geology field data, StraboSpot has been extended to field-based petrology and sedimentology. Current efforts will integrate micrographs and data related to microscale and experimental rock deformation. The StraboSpot data system uses a graph database, rather than a relational database approach. This approach increases its flexibility and allows the system to track geologically complex relationships. StraboSpot currently operates on two different platform types: (1) a field-based application that functions with or without internet access; and (2) a web interface (Internet-connected settings only).
The data system uses two main concepts - spots and tags - to organize data. A spot consists of a specific area at any spatial scale of observation. Spots are related in a purely spatial manner, and consequently, one spot can enclose multiple other spots that themselves contain spots. Spatial data can thus be tracked from regional to microscopic scale. Tags provide conceptual grouping of spots, allowing linkages between spots that are independent of their spatial position. A simple example of a tag is a geologic unit or formation. Multiple tags can be assigned to any spot, and tags can be assigned throughout a field study. The advantage of tags is their flexibility, in that they can be completely defined by individual scientists. Critically, tags are independent of the spatial scale of the observation. Tags may also be used to accommodate complex and complete descriptions.
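As a toy illustration of the spot-and-tag idea (not StraboSpot's actual graph-database schema; the class and field names here are invented), nested spots with free-form tags support queries that cut across spatial scales:

```python
from dataclasses import dataclass, field

@dataclass
class Spot:
    """An observation at any spatial scale; may spatially enclose other spots."""
    name: str
    tags: set = field(default_factory=set)
    children: list = field(default_factory=list)  # spots enclosed by this spot

    def add(self, child):
        self.children.append(child)
        return child

    def find_by_tag(self, tag):
        """Collect this spot and all nested spots carrying a tag,
        regardless of their spatial scale."""
        hits = [self] if tag in self.tags else []
        for c in self.children:
            hits.extend(c.find_by_tag(tag))
        return hits

# Spots nest spatially: outcrop > hand sample > thin section.
outcrop = Spot("outcrop-1", {"granite-unit"})
sample = outcrop.add(Spot("hand-sample-3", {"granite-unit", "mylonite"}))
thin_section = sample.add(Spot("thin-section-3a", {"mylonite"}))

names = [s.name for s in outcrop.find_by_tag("mylonite")]
```

Here the tag query returns the hand sample and its thin section but not the enclosing outcrop, since tags group spots conceptually, independent of the spatial nesting.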
The strength of the StraboSpot platform is its flexibility, and that it can be linked to other existing and future databases in order to integrate with digital efforts across the geological sciences. The StraboSpot data system – in coordination with other digital data efforts – will allow researchers to conduct types of science that were previously not possible and allows geologists to join big data initiatives.
How to cite: Newman, J., Walker, J. D., Tikoff, B., and Williams, R.: The StraboSpot data system for geological research, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12692, https://doi.org/10.5194/egusphere-egu2020-12692, 2020.
EGU2020-18998 | Displays | ITS4.8/ESSI4.1
EPOS-Norway Portal
Tor Langeland, Ove Daae Lampe, Gro Fonnes, Kuvvet Atakan, Jan Michalek, Christian Rønnevik, Terje Utheim, and Karen Tellefsen
The European Plate Observing System (EPOS) has established a pan-European infrastructure for solid Earth science data, governed by EPOS ERIC (European Research Infrastructure Consortium). The EPOS-Norway project is funded by the Research Council of Norway (Project no. 245763). The aim of the Norwegian EPOS e‑infrastructure is to integrate data from the Norwegian seismological and geodetic networks, as well as the data from the geological and geophysical data repositories.
We present the EPOS-Norway Portal as an online, open-access, interactive tool allowing visual analysis of multidimensional data. It currently provides access to more than 150 datasets (and growing) from four subdomains of Earth science in Norway, which can be combined with users' own data.
The EPOS-N Portal is implemented using Enlighten-web, a web program developed by NORCE. Enlighten-web facilitates interactive visual analysis of large multidimensional data sets. The Enlighten-web client runs inside a web browser. The user can create layouts consisting of one or more plots or views. Supported plot types are table views, scatter plots, vector plots, line plots and map views. For the map views the CESIUM framework is applied. Multiple scatter plots can be mapped on top of these map views.
An important element in the Enlighten-web functionality is brushing and linking, which is useful for exploring complex data sets to discover correlations and interesting properties hidden in the data. Brushing refers to interactively selecting a subset of the data. Linking involves two or more views on the same data sets, showing different attributes. The views are linked to each other, so that selecting a subset in one view automatically leads to the corresponding subsets being highlighted in all other linked views. If the updates in the linked plots are close to real-time while brushing, the user can perceive complex trends in the data by seeing how the selections in the linked plots vary depending on changes in the brushed subset. This interactivity requires GPU acceleration of the graphics rendering. In Enlighten-web, this is realized by using WebGL.
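The brushing-and-linking pattern can be sketched independently of any rendering stack. The classes below are invented for illustration (Enlighten-web's actual WebGL implementation is not described in detail here): one shared selection, updated by a brush, is pushed to every registered view.

```python
class LinkedViews:
    """Minimal brushing-and-linking: all views share one selection and
    are notified whenever the brushed subset changes."""
    def __init__(self, data):
        self.data = data
        self.selection = set()
        self.views = []

    def register(self, view):
        self.views.append(view)

    def brush(self, predicate):
        # Brushing: interactively select the subset matching a predicate.
        self.selection = {i for i, row in enumerate(self.data) if predicate(row)}
        # Linking: propagate that selection to every registered view.
        for view in self.views:
            view.highlight(self.selection)

class View:
    """Stand-in for a plot (scatter, map, table) that highlights a subset."""
    def __init__(self, name):
        self.name = name
        self.highlighted = set()

    def highlight(self, indices):
        self.highlighted = indices

data = [{"mag": 2.1, "depth": 5}, {"mag": 4.8, "depth": 12}, {"mag": 5.6, "depth": 30}]
model = LinkedViews(data)
scatter, map_view = View("scatter"), View("map")
model.register(scatter)
model.register(map_view)
model.brush(lambda row: row["mag"] > 4.0)
```

In a real client the highlight callback would re-render only the affected geometry at interactive rates, which is where GPU acceleration of the rendering matters.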
The EPOS-N Portal accesses the external Granularity Database (GRDB) for metadata handling. Metadata can specify, for example, data sources, services, ownership, license information and data policy. Bar charts can be used for faceted search in the metadata, e.g. search by categories. The EPOS-N Portal can access remote datasets via web services. Relevant web services include FDSNWS for seismological data and OGC services for geological and geophysical data (e.g. WMS – Web Map Services). Standalone datasets are available through preloaded data files. Users can also simply add another WMS server or upload their own dataset for visualization.
Enlighten-web will also be adapted as a pilot ICS-D (Integrated Core Services - Distributed) for visualization in the European infrastructure. The EPOS ICS-C (Integrated Core Services - Central) is the entry point for users for accessing the e-Infrastructure under EPOS ERIC. ICS-C will let users create and manage workflows that usually include accessing data and services located in the EPOS Thematic Core Services (TCS). The ICS-C and TCSs will be extended with additional computing facilities through the ICS-D concept.
How to cite: Langeland, T., Daae Lampe, O., Fonnes, G., Atakan, K., Michalek, J., Rønnevik, C., Utheim, T., and Tellefsen, K.: EPOS-Norway Portal, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18998, https://doi.org/10.5194/egusphere-egu2020-18998, 2020.
ITS4.9/ESSI2.17 – Spatio-temporal data science: theoretical advances and applications in computational geosciences
EGU2020-16232 | Displays | ITS4.9/ESSI2.17 | Highlight
Spatio-temporal decomposition of geophysical signals in North America
Aoibheann Brady, Jonathan Rougier, Bramha Dutt Vishwakarma, Yann Ziegler, Richard Westaway, and Jonathan Bamber
Sea level rise is one of the most significant consequences of projected future changes in climate. One factor which influences sea level rise is vertical land motion (VLM) due to glacial isostatic adjustment (GIA), which changes the elevation of the ocean floor. Typically, GIA forward models are used to estimate this motion, but their results are known to vary with the assumptions made about ice loading history and Earth structure. In this study, we implement a Bayesian hierarchical modelling (BHM) framework to explore a data-driven VLM solution for North America, with the aim of separating the overall signal into its GIA and hydrology (mass change) components. A Bayesian spatio-temporal model is implemented in INLA using satellite (GRACE) and in-situ (GPS) data as observations. Under the assumption that GIA varies in space but is constant in time, and that hydrology is both spatially and temporally variable, it is possible to separate the contributions of each component with an associated uncertainty level. Early results will be presented. Extensions of the BHM framework to investigate sea level rise at the global scale, such as the inclusion of additional processes and the incorporation of increased volumes of data, will be discussed.
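The identifiability assumption above (GIA constant in time, hydrology time-varying) can be illustrated with a toy decomposition on synthetic data. This is a deliberately crude stand-in for the actual Bayesian hierarchical model: a per-site time-mean plays the role of the time-constant component, with none of the spatial priors or uncertainty quantification that INLA provides.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_epochs = 5, 120  # synthetic GPS sites, monthly epochs
t = np.arange(n_epochs) / 12.0

# Synthetic truth under the abstract's assumption: the GIA-like component
# varies in space but is constant in time; hydrology varies in both.
gia = rng.normal(0.0, 2.0, size=(n_sites, 1))
hydrology = 3.0 * np.sin(2 * np.pi * t) * rng.uniform(0.5, 1.5, (n_sites, 1))
obs = gia + hydrology + rng.normal(0.0, 0.3, (n_sites, n_epochs))

# Crude separation consistent with that assumption: each site's time-mean
# estimates the constant GIA part; the residual is the time-varying
# (hydrology + noise) part.
gia_hat = obs.mean(axis=1, keepdims=True)
hydro_hat = obs - gia_hat
```

On this synthetic example the time-mean recovers the constant component well because the seasonal signal averages out over whole years; the BHM generalizes this idea while propagating uncertainty.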
How to cite: Brady, A., Rougier, J., Vishwakarma, B. D., Ziegler, Y., Westaway, R., and Bamber, J.: Spatio-temporal decomposition of geophysical signals in North America, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-16232, https://doi.org/10.5194/egusphere-egu2020-16232, 2020.
EGU2020-17518 | Displays | ITS4.9/ESSI2.17
Surface motion information retrieval from dense time series of spaceborne and terrestrial co-registered images
Sina Nakhostin, Jean-Philippe Malet, Mathilde Desrues, and David Michea
With progress in machine vision and image processing and access to massive time series of images (terrestrial and satellite imagery) allowing the computation of displacement fields, challenging tasks such as the detection of surface motion have become achievable. This calls for fast, flexible and automated procedures for modeling and information retrieval.
While supervised learning paradigms have found wide application in remote sensing, the scarcity of reliable labeled data for the training phase sets a noticeable limitation on the generalization of these procedures in the face of huge spatial, spectral or temporal redundancy. Although this downside can to some extent be ameliorated by enriching training samples through active learning techniques, relying merely on supervised approaches is a hindrance when analyzing large stacks of remote-sensing data. In addition, information retrieval becomes more challenging when the data are not direct acquisitions of the scene but derivatives of it (after applying different image processing steps). Modeling the motion maps and extracting high-level information from them, and/or fusing these maps with other available features of the domain (with the aim of better capturing the underlying physical patterns), are good examples of such situations, calling for a break from the supervised learning paradigm.
Dimensionality Reduction (DR) techniques are a family of mathematical models based on matrix factorization. Unsupervised DR techniques seek to provide a new representation of the data within a lower-dimensional (and thus more interpretable) subspace. Once this representative space is found, the original data are projected onto it in order to 1) reduce the redundancy within the data and 2) emphasize the most important factors within it. This indirectly helps clarify the best (observation) sampling strategies for characterizing and emphasizing the most significant detectable patterns within the data.
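As a minimal concrete instance of DR by matrix factorization (the abstract does not name a specific method, so principal component analysis via the singular value decomposition is used here as the standard example), a redundant 10-dimensional signal is projected onto the 2-D subspace that carries nearly all of its variance:

```python
import numpy as np

rng = np.random.default_rng(1)
# 200 observations of a 10-dimensional signal that really lives in a
# 2-D subspace, plus noise -- the redundancy DR is meant to remove.
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 10))
X = latent @ mixing + 0.01 * rng.normal(size=(200, 10))

# PCA as matrix factorization: center, factor with the SVD, and keep
# the leading components as the new representative subspace.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
scores = Xc @ Vt[:k].T                      # projection onto the k-D subspace
explained = (s[:k] ** 2).sum() / (s ** 2).sum()  # fraction of variance kept
```

The two retained components capture essentially all the variance here, which is exactly the redundancy-removal and factor-emphasis behaviour described above.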
Spatio-temporal clustering aims to improve clustering results by bringing in the spatial information within an image, such as coherent regions or textures, and fusing it with the information provided across the temporal (or, in the case of hyperspectral imagery, spectral) dimension. One way to reach this goal is to build an image pyramid of the scene, using methods such as Gaussian pyramids and/or the Discrete Wavelet Transform, and then iteratively cluster the scene from the coarsest to the finest resolution of the pyramid, with the membership probabilities passed on to the next level in each iteration.
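A minimal sketch of this coarse-to-fine scheme, simplified to a block-mean pyramid and hard k-means labels (in place of Gaussian pyramids and membership probabilities): centers found at the coarse level seed clustering at the finer level.

```python
import numpy as np

def downsample(img):
    """One pyramid level: here a simple 2x2 block mean."""
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def kmeans(values, centers, iters=20):
    """Minimal 1-D k-means on pixel values, starting from given centers."""
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for k in range(len(centers)):
            if np.any(labels == k):
                centers[k] = values[labels == k].mean()
    return centers, labels

rng = np.random.default_rng(2)
# Synthetic "motion map": two regions moving at different rates, plus noise.
img = np.where(np.arange(64)[:, None] < 32, 0.2, 1.0) + 0.05 * rng.normal(size=(64, 64))

# Coarse-to-fine: cluster the coarsest level first, then reuse its
# centers to seed clustering at the next finer resolution.
coarse = downsample(downsample(img))
centers, _ = kmeans(coarse.ravel(), centers=np.array([0.0, 1.5]))
centers, labels = kmeans(img.ravel(), centers=centers)
segmentation = labels.reshape(img.shape)
```

The coarse pass cheaply locates the two motion regimes; the fine pass only refines region boundaries, which is the efficiency argument for pyramid-based clustering.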
By applying the combination of the two techniques above to stacks of consecutive motion maps (produced by multi-temporal optical/SAR offset tracking) representing the surface behavior of different landslides, a more accurate classification of regions based on their landslide characteristics is expected to be achieved in a completely unsupervised manner. Extensive comparisons can then be made to evaluate the several existing clustering solutions in separating specific known surface movements. Examples of the application of these techniques to SAR-derived offset-tracking glacier and landslide displacement fields, and to optical terrestrial landslide displacement fields, will be presented and discussed.
How to cite: Nakhostin, S., Malet, J., Desrues, M., and Michea, D.: Surface motion information retrieval from dense time series of spaceborne and terrestrial co-registered images, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17518, https://doi.org/10.5194/egusphere-egu2020-17518, 2020.
EGU2020-15212 | Displays | ITS4.9/ESSI2.17
Augmenting the sensor network around Helgoland using unsupervised machine learning methods
Viktoria Wichert and Holger Brix
A sensor network surrounds the island of Helgoland, supplying marine data centers with autonomous measurements of variables such as temperature, salinity, chlorophyll and oxygen saturation. The result is a data collection containing information about the complicated conditions around Helgoland, which lies at the boundary between coastal waters and the open sea. Spatio-temporal phenomena, such as passing river plumes and pollutant influx through flood events, can be found in this data set. Through the data provided by the existing measurement network, these events can be detected and investigated.
Because of its important role in understanding the transition between coastal and sea conditions, there are plans to augment the sensor network around Helgoland with another underwater sensor station, an Underwater Node (UWN). The new node should optimally complement the existing sensor network, so it makes sense to place it in an area that is not yet represented well by other sensors. The exact spatial and temporal extent of the area of representativity around a sensor is hard to determine, but it is assumed to have statistical conditions similar to those the sensor measures. This is difficult to specify in the complex system around Helgoland and might change with both space and time.
Using an unsupervised machine learning approach, I determine areas of representativity around Helgoland with the goal of finding an ideal placement for a new sensor node. The areas of representativity are identified by clustering a dataset containing time series of the existing sensor network and complementary model data for a period of several years. The computed areas of representativity are compared to the existing sensor placements to decide where to deploy the additional UWN to achieve good coverage for further investigations of spatio-temporal phenomena.
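One simple way to operationalise "not yet represented well", sketched here on synthetic series (the actual analysis clusters multi-year sensor and model time series), is to score each candidate location by its strongest correlation with any existing sensor and place the node where that score is lowest:

```python
import numpy as np

rng = np.random.default_rng(3)
n_epochs = 200

# Hypothetical stand-in for the real data: series at 3 existing sensors
# and at 50 candidate locations taken from a model run.
sensors = rng.normal(size=(3, n_epochs))
candidates = np.empty((50, n_epochs))
for i in range(49):
    # most candidates behave like some existing sensor, plus noise ...
    candidates[i] = sensors[i % 3] + 0.2 * rng.normal(size=n_epochs)
# ... but one follows its own regime and is poorly represented
candidates[49] = rng.normal(size=n_epochs)

def best_new_site(sensors, candidates):
    """Score each candidate by its strongest |correlation| with any
    existing sensor; the weakest score marks the least-represented site."""
    z_s = (sensors - sensors.mean(1, keepdims=True)) / sensors.std(1, keepdims=True)
    z_c = (candidates - candidates.mean(1, keepdims=True)) / candidates.std(1, keepdims=True)
    corr = z_c @ z_s.T / sensors.shape[1]  # (n_candidates, n_sensors)
    return int(np.argmin(np.abs(corr).max(axis=1)))

site = best_new_site(sensors, candidates)
```

A clustering of the same series, as in the study, generalizes this pairwise-correlation view: candidates falling in a cluster containing no existing sensor are the under-represented ones.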
A challenge that occurs during the clustering analysis is to determine whether the spatial areas of representativity remain stable enough over time to base the decision of long-term sensor placement on its results. I compare results across different periods of time and investigate how fast areas of representativity change spatially with time and if there are areas that remain stable over the course of several years. This also allows insights on the occurrence and behavior of spatio-temporal events around Helgoland in the long-term.
Whether spatial areas of representativity remain temporally stable enough to be taken into account when augmenting sensor networks influences future network design decisions. An extended sensor network designed this way can capture a greater variety of the spatio-temporal phenomena around Helgoland and provide an overview of the long-term behavior of the marine system.
How to cite: Wichert, V. and Brix, H.: Augmenting the sensor network around Helgoland using unsupervised machine learning methods, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-15212, https://doi.org/10.5194/egusphere-egu2020-15212, 2020.
EGU2020-9456 | Displays | ITS4.9/ESSI2.17
From machine learning to sustainable taxation: GPS traces of trucks circulating in Belgium
Arnaud Adam and Isabelle Thomas
Transport geography has always been characterized by a lack of accurate data, leading to surveys often based on samples that are not spatially representative. However, the current deluge of data collected through sensors promises to overcome this scarcity. We consider one example: since April 1st, 2016, every truck circulating in Belgium must carry a GPS tracker for kilometre taxation. Every 30 seconds, this tracker records the position of the truck (as well as other information such as speed and direction), enabling individual taxation of trucks. This contribution uses a one-week exhaustive database covering all trucks circulating in Belgium to understand transport flows within the country, as well as the spatial effects of the taxation on the circulation of trucks.
Machine learning techniques are applied to over 270 million GPS points to detect truck stops, transforming the GPS sequences into a complete origin-destination matrix. Machine learning makes it possible to accurately classify stops of different natures (leisure stops, (un-)loading areas, or congested roads). Based on this matrix, we first give an overview of daily traffic and evaluate the number of stops made in every Belgian place. Second, GPS sequences and stops are combined to characterise the sub-trajectories of each truck (first/last miles and transit) by their fiscal debit. This individual characterisation, as well as its variation in space and time, is discussed here: is the individual taxation system always efficient in space and time?
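As an illustration of the stop-detection idea (the abstract's actual classifier is a machine learning model; the dwell-time rule, thresholds and synthetic pings below are simplifying assumptions), a stop can be sketched as a run of 30-second pings that stays within a small radius for a minimum duration:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(p, q):
    """Great-circle distance in metres between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def detect_stops(track, max_radius_m=150.0, min_duration_s=300.0):
    """track: time-ordered (unix_time, lat, lon) pings, ~30 s apart.
    A stop is a maximal run of pings staying within max_radius_m of the
    run's first ping and lasting at least min_duration_s."""
    stops, i = [], 0
    while i < len(track):
        t0, lat0, lon0 = track[i]
        j = i + 1
        while j < len(track) and haversine_m((lat0, lon0), track[j][1:]) <= max_radius_m:
            j += 1
        if track[j - 1][0] - t0 >= min_duration_s:
            stops.append((t0, track[j - 1][0], lat0, lon0))  # start, end, where
        i = j
    return stops

# synthetic track: 5 min driving, 10 min at a loading area, 5 min driving
track, t, lat, lon = [], 0, 50.80, 4.35
for _ in range(10):                      # driving: ~600 m between pings
    track.append((t, lat, lon)); t += 30; lat += 0.0054
for _ in range(20):                      # stationary
    track.append((t, lat, lon)); t += 30
for _ in range(10):                      # driving again
    track.append((t, lat, lon)); t += 30; lat += 0.0054
stops = detect_stops(track)              # one stop, from t=300 s to t=900 s
```

Classifying what each detected stop *is* (loading area, rest, congestion) is where the machine learning in the study comes in; this rule only finds the candidate stops.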
This contribution helps to better understand the circulation of trucks in Belgium, the places where they stop, and the importance of their locations from a fiscal point of view. What modifications of truck routes would lead to a more sustainable kilometre taxation? This contribution illustrates that combining big data and machine learning opens new roads for accurately measuring and modelling transportation.
How to cite: Adam, A. and Thomas, I.: From machine learning to sustainable taxation: GPS traces of trucks circulating in Belgium, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9456, https://doi.org/10.5194/egusphere-egu2020-9456, 2020.
EGU2020-9291 | Displays | ITS4.9/ESSI2.17
Data driven methods for real time flood, drought and water quality monitoring: applications for Internet of Water
Brianna Pagán, Nele Desmet, Piet Seuntjens, Erik Bollen, and Bart Kuijpers
The Internet of Water (IoW) is a large-scale permanent sensor network with 2500 small, energy-efficient wireless water quality sensors spread across Flanders, Belgium. This intelligent water management system will permanently monitor water quality and quantity in real time. Such a dense network of sensors with high (sub-hourly) temporal resolution will provide unprecedented volumes of data for drought, flood and pollution management, prediction and decision making. While traditional physical hydrological models are obvious choices for utilizing such a dataset, their computational costs and limitations must be considered for real-time decision making.
In collaboration with the Flemish Institute for Technological Research (VITO) and the University of Hasselt, we present several data mining and machine learning initiatives that support the IoW. One example is interpolating grab sample measurements to river stretches to monitor salinity intrusion: a shallow feed-forward neural network is trained on historical grab samples using physical characteristics of the river stretches (e.g. soil properties, ocean connectivity). Such a system allows salinity monitoring without complex convection-diffusion modeling, and estimates salinity in areas with fewer monitoring stations. Another highlighted project couples a neural network with a data assimilation scheme for water quality forecasting. A long short-term memory recurrent neural network is trained on historical water quality parameters and remotely sensed, spatially distributed weather data. Using forecasted weather data, a model estimate of the water quality parameters is obtained from the neural network. A Newtonian nudging data assimilation scheme further corrects the forecast by leveraging the previous day's observations, which can help correct for non-point or non-weather-driven pollution influences. The calculations are supported by an optimized database system developed by the University of Hasselt, which further exploits data mining techniques to estimate water movement and timing through the Flanders river network. As geospatial data grows exponentially in both temporal and spatial resolution, scientists and water managers must weigh computational resources against physical model accuracy. Such hybrid approaches allow near-real-time analysis without computational limitations and will further support research to make communities more climate resilient.
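The Newtonian nudging step can be sketched as relaxing the neural-network forecast toward the most recent observed-minus-modelled error, with a correction that decays over the forecast lead time. The relaxation weight and decay rate below are hypothetical, not the values used in the project:

```python
import numpy as np

def nudged_forecast(model_fc, last_obs, last_model, g0=0.5, decay=0.5):
    """Newtonian nudging sketch: shift each forecast step by a fraction of
    yesterday's observed-minus-modelled error; the fraction decays with
    forecast lead time so the correction fades out."""
    err = last_obs - last_model                      # most recent innovation
    weights = g0 * decay ** np.arange(len(model_fc))
    return np.asarray(model_fc, float) + weights * err

# e.g. a 3-step forecast of 10.0 mg/L while yesterday's observation was
# 12.0 mg/L against a modelled 10.0 mg/L
fc = nudged_forecast(np.array([10.0, 10.0, 10.0]), last_obs=12.0, last_model=10.0)
# fc -> [11.0, 10.5, 10.25]
```

The appeal of this scheme is its cost: a single vector operation per forecast, which is what makes it usable in a real-time setting where rerunning a physical model would be too slow.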
How to cite: Pagán, B., Desmet, N., Seuntjens, P., Bollen, E., and Kuijpers, B.: Data driven methods for real time flood, drought and water quality monitoring: applications for Internet of Water, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9291, https://doi.org/10.5194/egusphere-egu2020-9291, 2020.
EGU2020-20339 | Displays | ITS4.9/ESSI2.17
Analytics Optimized Geoscience Data Store with STARE-based Packaging
Kwo-Sen Kuo and Michael Rilee
The only effective strategy to address the volume challenge of Big Data is “parallel processing”, e.g. employing a cluster of computers (nodes) in which a large volume of data is partitioned and distributed to the cluster nodes. Each node processes a small portion of the whole volume, so the nodes, working in tandem, can collectively process the entire volume in a much shorter time. In the presence of data variety, however, this is no longer straightforward: naïve partitioning and distribution of diverse geo-datasets (packaged with existing practice) inevitably results in misalignment of the data for analysis. Expensive cross-node communication, itself a form of data movement, then becomes necessary to bring the data into alignment before analysis can commence.
Geoscience analysis predominantly requires spatiotemporal alignment of diverse data. For example, we often need to compare observations acquired by different means and platforms, and to compare model output with observations. Such comparisons are meaningful only if data values for the same space and time are compared. With the existing practice of packaging data in conventional array data structures, it is nearly impossible to spatiotemporally align diverse data: while array indices are generally used for partitioning and distribution, the same indices more often than not refer to different spatiotemporal neighborhoods in different datasets (even in different data granules). Partitioning and distribution by conventional array indices thus often leaves data from the same spatiotemporal neighborhood, but from different datasets, residing on different nodes, and comparison cannot be performed until they are brought together on the same node.
Therefore, we need indices that tie directly and consistently to spatiotemporal neighborhoods for partitioning and distribution. SpatioTemporal Adaptive-Resolution Encoding (STARE) provides exactly such indices, which can replace floating-point encoding of longitude-latitude and time as a more analytics-optimized alternative. Moreover, data packaging can be based on STARE indices. Thanks to its hierarchical nature, geo-spatiotemporal data packaged along the STARE hierarchy offers an essentially reusable partitioning, adaptable to various computing-and-storage architectures, through which spatiotemporal alignment of geo-data from diverse sources can be achieved readily and scalably to optimize parallel analytic operations.
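STARE itself is built on a hierarchical triangular mesh, but the core idea can be sketched with a simpler quadtree-style (Morton) index: spatially close points share leading index bits, so keying diverse datasets by such an index sends data for the same neighborhood to the same partition, and hence the same cluster node. The resolution level, prefix width and coordinates below are illustrative assumptions:

```python
def morton_index(lat, lon, level=10):
    """Quadtree-style spatial index: discretise lat/lon into 2**level cells
    per axis and interleave the bits. Nearby points share leading bits, so
    partitioning on index prefixes co-locates same-neighborhood data
    regardless of which dataset it came from."""
    ny = min(int((lat + 90.0) / 180.0 * (1 << level)), (1 << level) - 1)
    nx = min(int((lon + 180.0) / 360.0 * (1 << level)), (1 << level) - 1)
    code = 0
    for b in range(level):
        code |= ((ny >> b) & 1) << (2 * b + 1) | ((nx >> b) & 1) << (2 * b)
    return code

def bucket(code, level=10, prefix_bits=8):
    """Partition key: the top prefix_bits of the 2*level-bit index."""
    return code >> (2 * level - prefix_bits)

# two observations ~1 km apart (say, a model cell and a satellite pixel)
a = morton_index(54.18, 7.89)
b = morton_index(54.19, 7.90)
c = morton_index(-33.9, 151.2)   # a far-away point
# a and b fall in the same bucket (same node); c does not
```

With conventional array indices, by contrast, nothing ties `a` and `b` together: the same array position in two granules can refer to entirely different places, which is exactly the misalignment the abstract describes.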
How to cite: Kuo, K.-S. and Rilee, M.: Analytics Optimized Geoscience Data Store with STARE-based Packaging, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20339, https://doi.org/10.5194/egusphere-egu2020-20339, 2020.
EGU2020-12492 | Displays | ITS4.9/ESSI2.17 | Highlight
Modeling and Capturing New Phenomena from Very High Cadence Earth Observations
Giovanni Marchisio and Rasmus Houborg
Planet operates the largest constellation of Earth-observing satellites in human history, collecting 1.3 million 29 MP multispectral images covering 250 million km² daily at a resolution of 3-5 meters. This amounts to more than twice the Earth’s total landmass every day and more than 10 times the area covered by all other commercial and public sources combined, including Sentinel and Landsat, at a higher resolution. To date we have collected an average of 1,200 images for every point on the surface of the planet. This provides an unparalleled amount of data from which to establish historical baselines and to train and refine machine learning algorithms. Intersecting dense time series of global observations with modern deep learning solutions allows us to take a daily pulse of the planet as has never been done before.
The daily temporal cadence and higher resolution at global scale are unlocking new challenges and opportunities. These range from tracking and discovering previously unknown natural phenomena to improving existing approaches for modeling vegetation phenology and monitoring human impact on the environment. We will provide a brief overview of recent success stories from our university partner ecosystem. For instance, spatio-temporal analytics based on millions of observations has enabled researchers to show that sub-seasonal fluctuations in Arctic-Boreal surface waters can increase carbon emissions and affect global climate to an extent that has eluded traditional satellite remote sensing. The new data source has also enabled intraday measurements of river flows, the first measurements of crop water usage and evapotranspiration from space, field-level sowing date prediction on a nearly daily basis, and improved detection of early-season corn nitrogen stress.
The second part of our presentation covers Planet’s own internal development of spatio-temporal deep learning solutions which target the interaction between geosphere and anthroposphere. Man-made structures such as roads and buildings are among the information layers that we are beginning to extract from our imagery reliably and at a global scale. Our deep learning models, with about seven million parameters, are trained on several billion labeled pixels representative of a wide variety of terrains, densities, land cover types and seasons worldwide. The outcome is a pipeline that has produced the most complete and current map of all the roads and buildings worldwide. It reveals details not available in popular mapping tools, in both industrialized cities and rural settlements. The high temporal cadence of these spatial information feeds increases our confidence in tracking permanent change associated with urbanization and improves our knowledge of how human settlements grow. Applications include tracking urban sprawl at a country level in China, deriving land consumption rates for countries in Sub-Saharan Africa, identifying construction in flood risk zones worldwide, and timely augmentation of OpenStreetMap in disaster management situations that affect developing countries. With continually refreshed imagery from space, such maps can be updated to highlight new changes around the world, opening up new possibilities to improve transparency and help life on Earth.
How to cite: Marchisio, G. and Houborg, R.: Modeling and Capturing New Phenomena from Very High Cadence Earth Observations, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12492, https://doi.org/10.5194/egusphere-egu2020-12492, 2020.
EGU2020-4099 | Displays | ITS4.9/ESSI2.17
Understanding traffic distribution pattern from the perspective of urban land use
Min Zhang
With rapid urbanization, many problems, such as traffic congestion, are becoming more serious in big cities. Different urban land use types influence traffic differently; analysing the relationship between urban traffic and urban land use is therefore important for a better understanding of urban traffic status. This study first applies spatial data analysis and time series analysis to derive urban traffic patterns from spatial and temporal perspectives: using one week of traffic sensor data, we measure urban commuting patterns, which comprise a weekday mode and a weekend mode. Second, the study analyses the relationship between traffic status and land use type at the traffic analysis zone (TAZ) level, which indicates that traffic status exhibits spatial autocorrelation and that commercial and mixed land use types may result in more serious traffic congestion. The research can be of value for urban understanding and decision making in urban management, urban planning and traffic control.
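The spatial autocorrelation finding can be illustrated with global Moran's I computed over a toy set of traffic analysis zones; the adjacency matrix and congestion values below are invented for illustration and are not the study's data:

```python
import numpy as np

def morans_i(x, W):
    """Global Moran's I for values x over zones with binary adjacency W:
    I > 0 means similar values cluster in adjacent zones, I < 0 means
    neighbouring zones tend to differ."""
    z = np.asarray(x, float) - np.mean(x)
    return len(z) / W.sum() * (z @ W @ z) / (z @ z)

# four TAZs in a row; adjacent zones are linked
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], float)
i_clustered = morans_i([10, 9, 2, 1], W)     # congestion clustered at one end
i_alternating = morans_i([10, 1, 10, 1], W)  # checkerboard pattern
# i_clustered > 0 (positive spatial autocorrelation), i_alternating < 0
```

A positive I over real TAZ-level congestion values is what supports the statement that traffic status has spatial autocorrelation.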
How to cite: Zhang, M.: Understanding traffic distribution pattern from the perspective of urban land use, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4099, https://doi.org/10.5194/egusphere-egu2020-4099, 2020.
EGU2020-13040 | Displays | ITS4.9/ESSI2.17
Negative Effects of Shrinking Cities and the Dilemma of Their Sustainable Development
Guolei Zhou
Abstract: Urban shrinkage has become a global phenomenon. Faced with a declining population, shrinking cities continue to lose their development vitality, and it is difficult for them to regain their past prominence. How to realize the sustainable development of shrinking cities is an important issue that deserves in-depth study. We apply big data to analyze the spatiotemporal changes in the population of shrinking cities. Urban shrinkage leads to a series of chain reactions, reflected in all aspects of social and economic development: shops close down constantly, and commercial streets see very few visitors at night; industrial enterprises move elsewhere, and urban employment and fiscal income decline sharply; the city's economy faces the risk of collapse. Consequently, poorly maintained infrastructure and public service facilities, falling residents' incomes and declining happiness lead to a lack of cohesion in urban society. The sustainable development of shrinking cities will face great difficulties. We must therefore be aware of the seriousness of this problem and take the actions necessary to reduce the negative effects of urban shrinkage.
Keywords: shrinking cities, sustainable development, society, economy
How to cite: Zhou, G.: Negative Effects of Shrinking Cities and the Dilemma of Their Sustainable Development, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13040, https://doi.org/10.5194/egusphere-egu2020-13040, 2020.
EGU2020-1153 | Displays | ITS4.9/ESSI2.17 | Highlight
Synthetic sampling for spatio-temporal land cover mapping with machine learning and the Google Earth Engine in Andalusia, Spain
Laura Bindereif, Tobias Rentschler, Martin Bartelheim, Marta Díaz-Zorita Bonilla, Philipp Gries, Thomas Scholten, and Karsten Schmidt
Land cover information plays an essential role in resource development, environmental monitoring and protection. Amongst other natural resources, soils and soil properties are strongly affected by land cover and land cover change, which can lead to soil degradation. Remote sensing techniques are very suitable for spatio-temporal land cover mapping and change detection, and remote sensing programs have established vast data archives. Machine learning provides appropriate algorithms to analyse such amounts of data efficiently and with accurate results. However, machine learning methods require specific sampling techniques and are usually designed for balanced datasets with an even training sample frequency. Most real-world datasets are imbalanced, so methods that reduce this imbalance through synthetic sampling are required. Synthetic sampling methods increase the number of samples in the minority class and/or decrease the number in the majority class to achieve higher model accuracy. The Synthetic Minority Over-Sampling Technique (SMOTE) is a method to generate synthetic samples and balance the dataset that is used in many machine learning applications. In the middle Guadalquivir basin, Andalusia, Spain, we used random forests with Landsat images from 1984 to 2018 as covariates to map land cover change with the Google Earth Engine. The sampling design was based on stratified random sampling according to the CORINE land cover classification of 2012. The land cover classes in our study were arable land, permanent crops (plantations), pastures/grassland, forest and shrub; artificial surfaces and water bodies were excluded from modelling. However, the 130 training samples were imbalanced: the classes pasture (7 samples) and shrub (13 samples) had fewer samples than the other classes (48, 47 and 16 samples). This led to misclassifications and negatively affected the classification accuracy.
Therefore, we applied SMOTE to increase the number of samples and the classification accuracy of the model. Preliminary results are promising and show an increase in classification accuracy, especially for the previously underrepresented classes pasture and shrub. This corresponds to the results of studies with other objectives, which also find that synthetic sampling methods improve the performance of classification frameworks.
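As an illustration only (the sample values below are invented, and the authors' actual pipeline runs on Landsat covariates in the Google Earth Engine), a minimal NumPy sketch of the core SMOTE idea, interpolating between a minority-class sample and one of its k nearest minority-class neighbours, could look like this:

```python
import numpy as np

def smote(X, n_new, k=3, rng=None):
    """Generate n_new synthetic minority-class samples (basic SMOTE).

    For each synthetic sample: pick a random minority sample x, pick one of
    its k nearest minority neighbours x_nn, and interpolate at a random
    point on the segment between them.
    """
    rng = np.random.default_rng(rng)
    X = np.asarray(X, dtype=float)
    # pairwise distances within the minority class
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nn = np.argsort(d, axis=1)[:, :k]          # k nearest neighbours per sample
    synth = np.empty((n_new, X.shape[1]))
    for i in range(n_new):
        j = rng.integers(len(X))               # random minority sample
        m = nn[j, rng.integers(k)]             # one of its k neighbours
        gap = rng.random()                     # interpolation factor in [0, 1)
        synth[i] = X[j] + gap * (X[m] - X[j])
    return synth

# Toy minority class, e.g. 7 'pasture' training pixels with two band values
minority = np.array([[0.2, 0.4], [0.25, 0.38], [0.22, 0.45],
                     [0.3, 0.41], [0.27, 0.36], [0.21, 0.39], [0.26, 0.43]])
new_samples = smote(minority, n_new=40, k=3, rng=0)
print(new_samples.shape)  # (40, 2)
```

Because each synthetic sample lies on a segment between two real minority samples, the oversampled class stays inside the convex hull of the original observations.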
How to cite: Bindereif, L., Rentschler, T., Bartelheim, M., Díaz-Zorita Bonilla, M., Gries, P., Scholten, T., and Schmidt, K.: Synthetic sampling for spatio-temporal land cover mapping with machine learning and the Google Earth Engine in Andalusia, Spain, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1153, https://doi.org/10.5194/egusphere-egu2020-1153, 2020.
EGU2020-16187 | Displays | ITS4.9/ESSI2.17
Predict urban growth in a low-density context: Basilicata region study case
Lucia Saganeiti, Ahmed Mustafà, Jacques Teller, and Beniamino Murgante
This paper presents a spatiotemporal analysis to simulate and predict urban growth. It is important to study the growth of cities to understand their implications for environmental sustainability and infrastructure needs. The aim of this work is to predict future scenarios in low-density settlements in order to control and regulate the processes of urban transformation.
We applied an integrated approach based on multinomial logistic regression (MLR) and cellular automata (CA) for urban sprinkling modelling. Our case study is the Basilicata region in southern Italy, which is affected by urban sprinkling, literally defined as "a small amount of urban territory distributed in scattered particles".
Built-up density maps were created on the basis of three regional building datasets (1989, 1998 and 2013) with four density classes: no built-up, low density, medium density and high density. These sources were used for calibrating and validating the model as well as for the future simulation of urban sprinkling. Two components were considered for the calculation of the transition potential from one density class to another. For the first component, the causative factors of built-up development were calibrated using MLR for the expansion and densification processes. The causative factors comprise elevation and slope as physical factors; Euclidean distance to railway stations, to different street types, and to large and medium-sized cities as proximity factors; population density and employment rate as socio-economic factors; and zoning for land use policies in the study area.
The second component is the CA neighbourhood effect, which was calibrated using a multi-objective genetic algorithm (MOGA) as in Mustafa et al. (2018). The transition potential was calibrated for the 1989-1998 period, and the calibration outcomes were used to simulate the 2013 map, which was compared with the actual map of 2013 (validation). We then used the calibrated model to simulate urban growth in the year 2030.
The robustness of the MLR was evaluated with the Receiver Operating Characteristic (ROC) index, and the Fuzziness index was used to evaluate the accuracy of the validation process.
The results of the 2030 prediction show the greatest variations in class 1 (low density), which represents the sprinkling of urban cells across the territory.
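The MLR side of the coupling above can be sketched as follows. This is an illustration only: the causative factors and density classes are synthetic stand-ins, and the MOGA-calibrated CA neighbourhood effect and the real building datasets are not reproduced here. The snippet fits a multinomial logistic regression on per-cell covariates and returns per-cell transition potentials (class probabilities):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Hypothetical training table: one row per grid cell for the 1989-1998 period.
# Columns stand in for elevation, slope, distance to railway, population density.
n = 500
X = rng.random((n, 4))
# Hypothetical observed density class (0 = no built-up ... 3 = high density),
# loosely driven by the covariates so the regression has something to learn.
y = np.clip((2.5 * X[:, 3] - 1.5 * X[:, 1] + rng.normal(0, 0.3, n) + 1).round(),
            0, 3).astype(int)

# Multinomial logistic regression: transition potential of each cell
mlr = LogisticRegression(max_iter=1000).fit(X, y)
potential = mlr.predict_proba(X)      # (n_cells, n_classes) transition potentials
predicted = potential.argmax(axis=1)  # most likely density class per cell
print(potential.shape)
```

In the full model these probabilities would be combined with the CA neighbourhood term before a cell's 2013 (or 2030) class is assigned.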
Keywords: Low-density, Cellular Automata, Multinomial Logistic Regression, Urban Sprinkling, Basilicata Region.
How to cite: Saganeiti, L., Mustafà, A., Teller, J., and Murgante, B.: Predict urban growth in a low-density context: Basilicata region study case, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-16187, https://doi.org/10.5194/egusphere-egu2020-16187, 2020.
EGU2020-7181 | Displays | ITS4.9/ESSI2.17
Delimitating functional zones at municipal or county level in China based on a spatial optimization and simulation coupling approach
Dan Li, Jigang Qiao, and Yihan Zhang
Territory spatial planning is a guide and blueprint for future territorial development in China. It aims to form a scientific, rational, intensive, and efficient pattern of spatial protection and development in territory space. According to the government, the first task is to delimit the functional zones of ecology, agriculture and urban development, and to delineate ecological protection red lines, permanent basic farmland boundaries, and urban development boundaries ("three zones and three lines"). Currently, China uses a resource and environment carrying capacity and land space development suitability evaluation ("double evaluations") to complete the delimitation task. However, the process of these evaluations and demarcation is relatively complicated, requires a high level of human intervention, and is not very operable, so it is not practical at municipal or county level. We propose a new delineation framework, with methods and software tools for the delimitation work, based on a coupled spatial optimization and simulation approach, and verify it with an example in Guangzhou, a super-large city in China. The results show that this method can rapidly and efficiently delimit urban, ecological and agricultural zones based on regional geographic background conditions, using an ant colony optimization algorithm and a cellular automata model to delineate compact urban zones. Compared with the "three zones" division plan in the draft "Guangzhou Land and Space Master Plan (2018-2035)" published by the local government, the functional zones delimited by the proposed method meet the quantitative requirements of the draft while providing a more detailed and realistic spatial pattern of the three functional zones, which can be very useful for territory spatial planning at municipal and county level.
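The compact-urban-zone part of the approach can be illustrated with a toy cellular automaton (this is not the authors' implementation, and it omits the ant colony optimization used for the ecological and agricultural zones; the grid, suitability values and weight are invented). Cells with high suitability and many urban neighbours are converted first, which rewards compact growth:

```python
import numpy as np

def grow_urban_zone(suitability, seeds, quota, w_neigh=0.5):
    """Grow a compact urban zone on a grid until a cell quota is met.

    Each iteration converts the non-urban cell with the highest score,
    where score = suitability + w_neigh * (fraction of urban neighbours).
    The neighbourhood term rewards compactness, as in CA urban models.
    """
    urban = seeds.astype(bool).copy()
    while urban.sum() < quota:
        # count urban cells in the 3x3 Moore neighbourhood of every cell
        p = np.pad(urban, 1).astype(float)
        neigh = sum(p[1 + di:p.shape[0] - 1 + di, 1 + dj:p.shape[1] - 1 + dj]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)) - urban
        score = suitability + w_neigh * neigh / 8.0
        score[urban] = -np.inf           # already urban
        score[neigh == 0] = -np.inf      # only grow contiguously
        urban[np.unravel_index(np.argmax(score), urban.shape)] = True
    return urban

rng = np.random.default_rng(1)
suit = rng.random((20, 20))              # hypothetical suitability surface
seeds = np.zeros((20, 20), dtype=bool)
seeds[10, 10] = True                     # existing urban seed cell
zone = grow_urban_zone(suit, seeds, quota=40)
print(zone.sum())  # 40
```

The quota plays the role of the quantitative requirement from the master plan draft: the zone is grown until the target urban area is reached, with its shape determined jointly by suitability and compactness.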
How to cite: Li, D., Qiao, J., and Zhang, Y.: Delimitating functional zones at municipal or county level in China based on a spatial optimization and simulation coupling approach, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7181, https://doi.org/10.5194/egusphere-egu2020-7181, 2020.
EGU2020-18512 | Displays | ITS4.9/ESSI2.17
Investigating the effects of land use change on ecosystem services: the Basilicata region (Italy) case study
Angela Pilogallo, Lucia Saganeiti, Francesco Scorza, and Beniamino Murgante
By the end of this century, the effects of land-use change on ecosystem services are expected to be more significant than those of other worldwide transformation processes such as climate change, altered atmospheric concentrations of greenhouse gases, or the spread of invasive alien species.
In recent years, the scientific literature has produced numerous land-use models that aim to explore the behaviour of land-use systems under changing environmental conditions and different territorial transformations, explain the dynamics that contribute to them, and formulate scenario analyses to be followed up by development strategies.
In addition, an important but still little-explored dimension of the nexus between planning and sustainability is the assessment of territorial changes and development dynamics through the analysis of the alterations they induce in processes, functions and complex systems.
While land-use models can help investigate the effects of a combination of drivers at different scales, the ecosystem services approach can help to better understand the trade-offs between different development scenarios, making explicit the relations that every variation induces in the relationship between man and territory and among different environmental components.
The present work is set in this framework: it aims to integrate scenario analysis of the development of the Basilicata region (Italy) with assessments of the alterations induced in its capacity to deliver ecosystem services. Although the region is very sparsely populated and characterised by low settlement density, it is not immune to the global phenomenon of land take associated with high territorial fragmentation.
The growth of the building stock driven by real development dynamics and the demographic increase typical of the post-war period was followed by further growth of the built environment, in contrast with the demographic trend, and by significant land take due to the massive construction of renewable energy production plants.
Change models have been applied to identify and classify the driving forces of land-use change and to predict future development scenarios.
In order to contribute to the development of decision support systems, the scenarios resulting from the implementation of different policies are analyzed with the ecosystem services approach.
How to cite: Pilogallo, A., Saganeiti, L., Scorza, F., and Murgante, B.: Investigating the effects of land use change on ecosystem services: the Basilicata region (Italy) case study, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18512, https://doi.org/10.5194/egusphere-egu2020-18512, 2020.
EGU2020-2251 | Displays | ITS4.9/ESSI2.17
A process-oriented approach for mining marine heatwaves with a time series of raster formatted products
Cunjin Xue and Changfeng Jing
A marine heatwave (MHW) is defined as a coherent area of extremely warm sea surface temperature that persists for days to months, and that evolves in space and time from production through development to death. MHWs are usually related to climatic extremes that can have devastating and long-term impacts on ecosystems, with subsequent socioeconomic consequences. Long-term remote sensing products make it possible to mine successive MHWs at the global scale. However, most of the literature focuses on the spatial distribution of MHWs at a fixed time snapshot or on temporal statistics at a fixed grid cell. As few studies consider the temporal evolution of MHWs, mining the dynamic changes of their spatial structure remains a major challenge. This manuscript therefore proposes a process-oriented approach for identifying and tracking MHWs, named PoAITM. The PoAITM considers the dynamic evolution of an MHW and consists of three steps. The first step uses a threshold-based algorithm to identify the time series of grid pixels that meet the MHW definition, called MHW pixels; the second uses spatial proximity to connect the MHW pixels in each snapshot and transform them into spatial objects, called MHW objects; the third combines the dynamic characteristics and spatiotemporal topologies of MHW objects between consecutive snapshots to identify and track those belonging to the same event. The extracted MHW, evolving from production through development to death, is defined as an MHW process. Compared with prevailing methods of tracking MHWs, the PoAITM has three advantages. First, it combines the spatial distribution and temporal evolution of MHWs to identify and track MHW objects. Second, it considers the spatial structure of an MHW not only at the current snapshot but also at the previous and next ones, which ensures the completeness of the MHW in the temporal domain. Third, the dynamic behaviours of MHWs, e.g. developing, merging and splitting, are also found between successive MHW objects. Finally, we explore global MHWs from sea surface temperature products covering January 1982 to December 2018. The results not only confirm well-known knowledge but also reveal new findings about the evolution characteristics of MHWs, which may provide new references for further study of global climate change.
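Assuming the three steps reduce, in their simplest form, to thresholding, connected-component labelling, and overlap-based linking between consecutive snapshots, a toy implementation on a synthetic SST stack might look as follows. This is a sketch of the general idea, not the authors' algorithm, which also treats merging and splitting behaviours explicitly:

```python
import numpy as np
from scipy import ndimage

def track_mhw(sst, clim, thresh):
    """Toy three-step tracking on a (time, y, x) SST stack."""
    hot = sst > (clim + thresh)                 # step 1: flag MHW pixels
    labels = []
    next_id = 1
    prev = np.zeros(sst.shape[1:], dtype=int)
    for t in range(sst.shape[0]):
        lab, n = ndimage.label(hot[t])          # step 2: spatial MHW objects
        ids = np.zeros_like(prev)
        for obj in range(1, n + 1):
            mask = lab == obj
            overlap = np.unique(prev[mask])
            overlap = overlap[overlap > 0]
            if overlap.size:                    # step 3: continue a process
                ids[mask] = overlap[0]          # (smallest overlapping id wins)
            else:                               # a new process is born
                ids[mask] = next_id
                next_id += 1
        prev = ids
        labels.append(ids)
    return np.stack(labels)

# Synthetic 4-snapshot stack with one warm blob drifting east
sst = np.zeros((4, 10, 10))
for t in range(4):
    sst[t, 4:7, 2 + t:5 + t] = 3.0              # 3 K above climatology
tracked = track_mhw(sst, clim=0.0, thresh=1.0)
print(np.unique(tracked))  # [0 1]: one process tracked through all snapshots
```

Because the drifting blob overlaps itself between consecutive snapshots, all four spatial objects receive the same process identifier; a blob appearing with no spatial overlap to the previous snapshot would start a new process instead.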
How to cite: Xue, C. and Jing, C.: A process-oriented approach for mining marine heatwaves with a time series of raster formatted products, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2251, https://doi.org/10.5194/egusphere-egu2020-2251, 2020.
EGU2020-6382 | Displays | ITS4.9/ESSI2.17
Study on spacial-correlation between water pollution and industrial agglomeration in the developed regions of China
Li-hong Shi
Abstract: Agglomeration of manufacturing industries is a fundamental driving force of urban development. However, many manufacturing industries produce sewage and thus have negative effects on the regional environment. This study first estimates the degree of clustering of pollution-intensive manufacturing industries in the developed region of China at city level by introducing the kernel density distribution function, and then evaluates the pollution distribution pattern by dividing the study area into several environmental units according to the naturally integrated characteristics of the primary streams. Furthermore, we quantitatively analyze the mechanism by which water environment quality responds to industrial distribution by utilizing a bivariate spatial autocorrelation model. Results show that pollution-intensive manufacturing industries form clusters in suburban and non-sensitive areas. Besides, the density of pollution sources gradually decreases from the chief canals to the peripheral areas. Spatial autocorrelation analysis shows that the spatial relationships differ by industry category: the agglomeration of the textile, petrochemical and metallurgical industries prominently affects the spatial heterogeneity of water pollution distribution, while the effects of the agglomeration of the food manufacturing and paper-making industries on water pollution are not significant. Based on the spatial autocorrelation between industrial agglomeration and pollution distribution, we divide the study area into four types: high-agglomeration and high-pollution areas, low-agglomeration and low-pollution areas, low-agglomeration and high-pollution areas, and high-agglomeration and low-pollution areas. On that basis, we analyze the formation scheme and provide policy suggestions regarding industrial development.
This paper provides a new perspective for the study of the interaction between industrial agglomeration and environmental effects, and will benefit the sustainable development of cities.
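The bivariate spatial autocorrelation step can be sketched as a GeoDa-style bivariate Moran's I between one variable and the spatial lag of a second variable. The weights matrix, units and values below are invented for illustration; the study's actual environmental units derive from the primary stream network:

```python
import numpy as np

def bivariate_morans_i(x, y, w):
    """Bivariate Moran's I between variable x at each site and the
    spatially lagged variable y at neighbouring sites.

    x, y : 1-D arrays of site values (e.g. industry kernel density and
           pollutant level per environmental unit)
    w    : (n, n) spatial weights matrix; rows are standardised to sum to 1
    """
    zx = (x - x.mean()) / x.std()
    zy = (y - y.mean()) / y.std()
    w = w / w.sum(axis=1, keepdims=True)   # row-standardise the weights
    return zx @ (w @ zy) / len(x)

# Toy example: 5 environmental units along a canal, neighbours share an edge
w = np.array([[0, 1, 0, 0, 0],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
agglomeration = np.array([5.0, 4.0, 3.0, 2.0, 1.0])   # clustered upstream
pollution     = np.array([4.5, 4.0, 2.5, 2.0, 1.5])   # tracks agglomeration
result = bivariate_morans_i(agglomeration, pollution, w)
print(round(result, 3))  # positive: pollution clusters where industry clusters
```

A clearly positive value corresponds to the high-agglomeration/high-pollution and low-agglomeration/low-pollution quadrants dominating; the four area types in the abstract are the quadrants of the corresponding Moran scatterplot.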
Key words: industrial agglomeration; Kernel Density Distribution function; water pollution; manufacturing industry; spatial autocorrelation
How to cite: Shi, L.: Study on spacial-correlation between water pollution and industrial agglomeration in the developed regions of China, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6382, https://doi.org/10.5194/egusphere-egu2020-6382, 2020.
Abstract: Agglomeration of the manufacturing industries is not only a fundamental driving force for urban development However, a large number of manufacturing industries produce sewage and thus have negative effects on regional environment. This study first estimates the degree of clustering of pollution intensive manufacturing industries in the developed region of China at city level by introducing the Kernel density distribution function, and then evaluates the pollution distribution pattern by dividing the study area into several environmental units according to the naturally integrated characteristics of the primary streams. Furthermore, we quantitatively analyze the mechanism of the response of water environment quality to industrial distribution by utilizing the bi-variate spatial autocorrelation model. Results show that pollution-intensive manufacturing industries form clusters in suburban and non-sensitive areas. Besides, the density of pollution sources gradually decreases from the chief canals to the peripheral areas. Spatial autocorrelation analysis shows that spatial-relationships show differences according to industry categories: the agglomeration of textile, petrochemical and metallurgical industries prominently affects the spatial heterogeneity of water pollution distribution while the effects of the agglomeration of food manufacturing and paper-making industry on water pollution are not significant. Based on the spatial autocorrelation between industrial agglomeration and pollution distribution, we divide the study area into four types: high-agglomeration and high-pollution area, low-agglomeration and low-pollution area, low-agglomeration and high-pollution area, high-agglomeration and low-pollution area. Based on the that, we analyze the formation scheme and provide policy suggestions regarding industrial development. 
This paper provides a new perspective for the study of the interaction between industrial agglomeration and environmental effects, and will benefit the sustainable development of cities.
Key words: industrial agglomeration; Kernel Density Distribution function; water pollution; manufacturing industry; spatial autocorrelation
EGU2020-13476 | Displays | ITS4.9/ESSI2.17
Application of nonparametric trend analysis to concentration time series
Artur Kohler
Groundwater contamination resulting from chemical releases related to anthropogenic activity often proves to be a persistent feature of the affected groundwater regime. The affected volume (i.e. where the concentration of hazardous substances exceeds a certain threshold) is a complex and dynamic entity commonly called a “contaminant plume”. The plume can be described as a spatially dependent concentration pattern with temporal behavior. Persistent plumes are regularly monitored: concentration data gained by repeated sampling of monitoring points and laboratory analyses of the samples are used to assess the actual state of the plume. The change of concentrations at certain points of the plume facilitates the assessment of its temporal behavior. Repeated sampling of the monitoring points provides concentration time series.
Concentration time series are evaluated for trends. Methods include parametric (least-squares regression) and non-parametric methods. The Mann-Kendall statistic is a commonly used, well-known non-parametric method.
When using Mann-Kendall statistics, consecutive concentration data are compared to each other, and their cumulative relation defines the Mann-Kendall statistic ‘S’. However, when comparing concentration data, laboratory uncertainties are usually neglected. Allowing for laboratory uncertainties raises the question of which concentrations are considered equal to, less than or greater than other concentrations. In addition, perturbing concentration data within their uncertainty ranges can change the previous equal/greater/less status of two concentrations, thus changing the Mann-Kendall statistic value, which sometimes results in differences in trend significance.
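The dependence of S on how ties are defined can be sketched in a few lines; the tolerance value and the concentration series below are hypothetical, chosen only to illustrate that allowing for laboratory uncertainty can change S:

```python
import itertools

def mann_kendall_s(values, tol=0.0):
    """Mann-Kendall statistic S: the sum of sign(x_j - x_i) over all pairs
    i < j. Pairs whose difference lies within +/- tol (e.g. a laboratory
    uncertainty) are treated as ties and contribute nothing."""
    s = 0
    for x_i, x_j in itertools.combinations(values, 2):
        diff = x_j - x_i
        if diff > tol:
            s += 1
        elif diff < -tol:
            s -= 1
    return s

conc = [10.0, 10.3, 10.5, 10.6, 11.5]   # hypothetical concentrations (mg/L)
print(mann_kendall_s(conc))             # strict comparison: S = 10
print(mann_kendall_s(conc, tol=0.5))    # with 0.5 mg/L tolerance: S = 5
```

Because small increments fall inside the tolerance band, S drops from 10 to 5 for the same series, which can shift the significance of the detected trend.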
How to cite: Kohler, A.: Application of nonparametric trend analysis to concentration time series, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13476, https://doi.org/10.5194/egusphere-egu2020-13476, 2020.
EGU2020-15891 | Displays | ITS4.9/ESSI2.17
Spatio-temporal variability of global soil salinization delineated by advanced machine learning algorithms
Amirhossein Hassani, Adisa Azapagic, and Nima Shokri
Soil salinity is among the major threats affecting soil fertility, stability and vegetation. It can also accelerate desertification processes, especially in arid and semi-arid regions. An accurate estimation of the global extent and distribution of salt-affected soils and their temporal variations is pivotal to our understanding of salinity-induced land degradation processes and to designing effective remediation strategies. In this study, using legacy soil profile data and a broad set of climatic, topographic, and remotely sensed soil surface and vegetative data, we trained ensembles of classification and regression trees to map the spatio-temporal variation of soil salinity and sodicity (exchangeable sodium percentage) at the global scale from 1980 to 2018 at a 1 km resolution. The User’s Accuracies for soil salinity and sodicity classification were 88.05% and 84.65%, respectively. The 2018 map shows that globally ~944 Mha of land is saline (with saturated-paste electrical conductivity > 4 dS m-1), while ~155 Mha can be classified as sodic soils (with exchangeable sodium percentage > 15%). Our findings and the provided dataset show quantitatively how soil salinization is influenced by a broad array of climatic, anthropogenic and hydrologic parameters. Such information is crucial for effective water and land-use management, which is important for maintaining food security in the face of future climatic uncertainties. Moreover, our results, combined with the quantitative methodology developed in this study, provide an opportunity to delineate the role of anthropogenic activities in soil salinization. This information is useful not only for developing predictive models of primary and secondary soil salinization but also for natural resource management and policy makers.
How to cite: Hassani, A., Azapagic, A., and Shokri, N.: Spatio-temporal variability of global soil salinization delineated by advanced machine learning algorithms, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-15891, https://doi.org/10.5194/egusphere-egu2020-15891, 2020.
EGU2020-311 | Displays | ITS4.9/ESSI2.17
Remote Sensing of Surface Soil Moisture from FengYun MicroWave Radiation Imager (MWRI) Data Using a Machine Learning Technique
Sibo Zhang and Wei Yao
In the past, soil moisture could be retrieved from microwave imagers over most land conditions. However, algorithm performances over the Tibetan Plateau and Northwest China vary greatly from one algorithm to another due to frozen soils and surface volumetric scattering, and much of the western Chinese region is often filled with invalid retrievals. In this study, Soil Moisture Operational Products System (SMOPS) products from NOAA are used as the learning objective to train a machine learning (random forest) model for FY-3C MicroWave Radiation Imager (MWRI) data with multivariable inputs: brightness temperatures from all 10 MWRI channels from 10 to 89 GHz, brightness temperature polarization ratios at 10.65, 18.7 and 23.8 GHz, DEM (digital elevation model) height, and statistical soil porosity map data. Since the vegetation penetration of MWRI observations is limited, we exclude forest, urban and snow/ice surfaces in this work. It is shown that our new method performs very well and retrieves surface soil moisture over the Tibetan Plateau without major missing values. Compared to other soil moisture data, the volumetric soil moisture (VSM) from this study correlates with SMOPS products much better than the MWRI operational L2 VSM products: the R2 score increases from 0.3 to 0.6 and the ubRMSE score improves significantly from 0.11 m3 m-3 to 0.04 m3 m-3 over the period from 1 August 2017 to 31 May 2019. The spatial distribution of our MWRI VSM estimates is also much improved in western China. Moreover, our MWRI VSM estimates are in good agreement with the top 7 cm soil moisture of ECMWF ERA5 reanalysis data: R2 = 0.62, ubRMSD = 0.114 m3 m-3 and mean bias = -0.014 m3 m-3 at the global scale. We note that there is a risk of a data gap in AMSR2 from the present to 2025.
For satellite low-frequency microwave observations, MWRI observations from the FY-3 series satellites can therefore be a beneficial supplement that maintains data integrity and increases data density, since the FY-3B, FY-3C and FY-3D satellites, launched in November 2010, September 2013 and November 2017 respectively, are still working today, and FY-3D is designed to work until November 2022.
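The retrieval scheme described above can be sketched as follows. This is a minimal illustration on synthetic data only: the channel layout (V/H pairs assumed in columns 0-5), the value ranges, and the synthetic "truth" are all assumptions, not real MWRI or SMOPS values.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
tb = rng.uniform(150.0, 290.0, size=(n, 10))    # 10 channel brightness temperatures (K)
height = rng.uniform(0.0, 5000.0, size=(n, 1))  # DEM height (m)
porosity = rng.uniform(0.3, 0.6, size=(n, 1))   # soil porosity fraction

# Polarization ratio (V - H) / (V + H) for three low-frequency channel pairs,
# assumed here to occupy columns (0,1), (2,3), (4,5).
v, h = tb[:, 0:6:2], tb[:, 1:6:2]
pr = (v - h) / (v + h)

X = np.hstack([tb, pr, height, porosity])

# Synthetic target: volumetric soil moisture (m3 m-3) loosely tied to the
# inputs, standing in for the SMOPS learning objective.
y = np.clip(0.25 + 0.2 * pr[:, 0] + 0.2 * (porosity[:, 0] - 0.45)
            + rng.normal(0.0, 0.01, n), 0.02, 0.5)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[:400], y[:400])
vsm_pred = model.predict(X[400:])

# Unbiased RMSE (ubRMSE): RMSE after removing the mean bias.
bias = vsm_pred.mean() - y[400:].mean()
ubrmse = np.sqrt(np.mean((vsm_pred - y[400:] - bias) ** 2))
```

In the study itself the model is trained on real MWRI observations collocated with SMOPS, with forest, urban and snow/ice surfaces masked out before training.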
How to cite: Zhang, S. and Yao, W.: Remote Sensing of Surface Soil Moisture from FengYun MicroWave Radiation Imager (MWRI) Data Using a Machine Learning Technique , EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-311, https://doi.org/10.5194/egusphere-egu2020-311, 2020.
EGU2020-13227 | Displays | ITS4.9/ESSI2.17
Crop planting area identification and analysis based on multi-source high-resolution remote sensing data
Lei Wang, Haoran Sun, Wenjun Li, and Liang Zhou
Crop planting structure is of great significance for the quantitative management of agricultural water and the accurate estimation of crop yield. With the increasing spatial and temporal resolution of remote sensing optical and SAR (Synthetic Aperture Radar) images, efficient crop mapping over large areas becomes possible and accuracy is improved. In this study, the Qingyijiang Irrigation District in southwest China, which has heterogeneous terrain and a complex crop structure, is selected for comparison of crop identification methods. Multi-temporal optical (Sentinel-2) and SAR (Sentinel-1) data were used to calculate NDVI and the backscattering coefficient as the main classification indexes. The multi-spectral and SAR data showed significant change in different stages of the whole crop growth period and varied with crop type. Spatial distribution and texture analyses were also made. Classifications using different combinations of indexes were performed using neural network, support vector machine and random forest methods. The results showed that the use of multi-temporal optical data and SAR data in the key growing periods of the main crops can both provide satisfactory classification accuracy: the overall classification accuracy was greater than 82% and the Kappa coefficient was greater than 0.8. SAR data has high accuracy and much potential in rice identification, whereas optical data was more accurate for upland crop classification. In addition, classification accuracy can be effectively improved by combining classification indexes from optical and SAR data, raising the overall accuracy to 91.47%. The random forest method was superior to the other two methods in terms of both overall accuracy and the Kappa coefficient.
How to cite: Wang, L., Sun, H., Li, W., and Zhou, L.: crops planting area identification and analysis based on multi-source high resolution remote sensing data, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13227, https://doi.org/10.5194/egusphere-egu2020-13227, 2020.
EGU2020-138 | Displays | ITS4.9/ESSI2.17
Comparing a random forest based prediction of winter wheat yield to historical production potential
Yannik Roell, Amélie Beucher, Per Møller, Mette Greve, and Mogens Greve
Predicting wheat yield is crucial due to the importance of wheat across the world. When modeling yield, the difference between potential and actual yield consistently changes because of technology. Considering historical yield potential would help determine spatiotemporal trends in agricultural development. Comparing current and historical production in Denmark is possible because production has been documented throughout history. However, the current winter wheat yield model is solely based on soil. The aim of this study was to generate a new Danish winter wheat yield map and compare the results to historical production potential. Utilizing random forest with soil, climate, and topography variables, a winter wheat yield map was generated from 876 field trials carried out from 1992 to 2018. The random forest model performed better than the model based only on soil. The updated national yield map was then compared to production potential maps from 1688 and 1844. While historical time periods are characterized by numerous low production potential areas and few highly productive areas, present-day production is evenly distributed between low and high production. Advances in technology and farm practices have exceeded historical yield predictions. Thus, modeling current yield could be unreliable in future years as technology progresses.
How to cite: Roell, Y., Beucher, A., Møller, P., Greve, M., and Greve, M.: Comparing a random forest based prediction of winter wheat yield to historical production potential, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-138, https://doi.org/10.5194/egusphere-egu2020-138, 2020.
EGU2020-6276 | Displays | ITS4.9/ESSI2.17
A combinatorial method for improving hourly precipitation interpolation based on singular value decomposition
Sheng Sheng, Hua Chen, Chong-Yu Xu, Wen Zhang, Zhishuai Li, and Shenglian Guo
Reliable estimation of gridded precipitation datasets from gauge observations is crucial for hydrological modelling and water balance studies. Datasets developed by common precipitation interpolation methods are mainly derived from the spatial relationships among gauges while neglecting the trend contained in antecedent precipitation. Precipitation data can be viewed as an intrinsically related matrix, with columns representing temporal relationships and rows representing spatial relationships. A method called combinatorial point spatiotemporal interpolation based on singular value decomposition (CPST-SVD), which combines traditional interpolators with matrix factorization and considers the spatiotemporal correlation of precipitation, is proposed to improve estimation. Two widely used approaches, inverse distance weighting (IDW) and universal kriging (UK), were each combined with the new method to interpolate precipitation data. Hourly precipitation data from several flood events between 2012 and 2018, under different meteorological conditions in the Hanjiang Basin, China, were selected to verify the performance of the new method. The Funk SVD algorithm and the stochastic gradient descent (SGD) algorithm were introduced for matrix factorization and optimization. The performances of all methods in leave-one-out cross-validation were assessed and compared using five statistical indicators. The results show that CPST-SVD combined with IDW has the highest accuracy, followed by CPST-SVD combined with UK, then IDW and UK in descending order. Through combination, estimation errors in precipitation interpolation can be greatly reduced, especially when the distribution of surrounding gauges is not uniform or the precipitation at the target gauge is non-zero. In addition, the larger the precipitation amount of an event, the greater the error reduction.
Therefore, this study provides a more accurate method for interpolating precipitation based on the assessment of both temporal and spatial information.
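The Funk SVD with SGD step named above can be sketched as follows on a toy gauges-by-hours matrix; the learning rate, rank, regularization and the example storm profile are illustrative assumptions, not the study's settings.

```python
import numpy as np

def funk_svd(R, mask, k=2, lr=0.02, reg=0.01, epochs=1000, seed=0):
    """Factorize a gauges-by-hours precipitation matrix R ~ P @ Q.T using
    stochastic gradient descent over the observed entries only (Funk SVD)."""
    rng = np.random.default_rng(seed)
    n, m = R.shape
    P = 0.1 * rng.standard_normal((n, k))
    Q = 0.1 * rng.standard_normal((m, k))
    rows, cols = np.nonzero(mask)
    for _ in range(epochs):
        for i, j in zip(rows, cols):
            err = R[i, j] - P[i] @ Q[j]        # residual on one observed entry
            p_old = P[i].copy()
            P[i] += lr * (err * Q[j] - reg * P[i])
            Q[j] += lr * (err * p_old - reg * Q[j])
    return P @ Q.T

# Hypothetical hourly precipitation (mm) at 4 gauges over 6 hours: a shared
# storm profile scaled per gauge, so the matrix has low rank.
truth = np.outer([1.0, 0.8, 1.2, 0.5], [0.0, 2.0, 5.0, 3.0, 1.0, 0.0])
mask = np.ones_like(truth, dtype=bool)
mask[2, 2] = False  # pretend gauge 3 missed one observation
est = funk_svd(truth, mask)
```

In the full CPST-SVD scheme this matrix-completion estimate is then combined with an IDW or UK spatial estimate; only the factorization step is sketched here.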
How to cite: Sheng, S., Chen, H., Xu, C.-Y., Zhang, W., Li, Z., and Guo, S.: A combinatorial method for improving hourly precipitation interpolation based on singular value decomposition, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6276, https://doi.org/10.5194/egusphere-egu2020-6276, 2020.
EGU2020-18471 | Displays | ITS4.9/ESSI2.17
Anomaly Detection by STL Decomposition and Extended Isolation Forest on Environmental Univariate Time Series
İsmail Sezen, Alper Unal, and Ali Deniz
Atmospheric pollution is one of the primary environmental problems, and high concentration levels are critical for human health and the environment. This requires studying the causes of unusually high concentration levels that do not conform to the expected behavior of the pollutant, but it is not always easy to decide which levels are unusual, especially when the data are large and have a complex structure. Visual inspection is subjective in most cases, and a proper anomaly detection method should be used. Anomaly detection has been widely used in diverse research areas, but most methods have been developed for specific application domains. It is also not always a good idea to identify anomalies using data from nearby measurement sites, because of the spatio-temporal complexity of the pollutant. Therefore, a method is needed that estimates anomalies from univariate time series data.
This work suggests a framework based on STL decomposition and the extended isolation forest (EIF), a machine learning algorithm, to identify anomalies in univariate time series that have trend, multiple seasonalities and seasonal variation. The main advantage of the EIF method is that it assigns each observation an anomaly score.
In this study, a multi-seasonal STL decomposition was applied to a univariate PM10 time series to remove the trend and seasonal components, but STL cannot remove the seasonal variation in variance: the remainder still exhibits 24-hour and yearly variation. To remove this variation, hourly and annual inter-quartile ranges (IQR) are calculated and the data are standardized by dividing each value by the corresponding IQR value. This process removes the seasonality in variance, and the resulting data are processed by EIF to decide which values are anomalous according to an objective criterion.
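The IQR standardization step can be sketched in a few lines. The series below is synthetic, and only the hour-of-day part of the standardization is shown; the day-of-year part works the same way.

```python
import numpy as np

def iqr_standardize_by_hour(remainder, hours):
    """Divide each STL-remainder value by the interquartile range (IQR) of
    all values sharing its hour of day, removing the daily cycle in spread
    that STL leaves behind."""
    remainder = np.asarray(remainder, dtype=float)
    out = np.empty_like(remainder)
    for h in range(24):
        sel = hours == h
        q1, q3 = np.percentile(remainder[sel], [25, 75])
        out[sel] = remainder[sel] / (q3 - q1)
    return out

# Hypothetical STL remainder whose spread depends on hour of day.
rng = np.random.default_rng(1)
hours = np.arange(24 * 60) % 24
scale = 1.0 + 2.0 * (hours >= 18)          # evenings are noisier
remainder = rng.normal(0.0, 1.0, hours.size) * scale
z = iqr_standardize_by_hour(remainder, hours)
```

After the division, the IQR of every hourly subset is exactly 1, so a single anomaly score threshold (for example from an isolation forest) applies uniformly across the day.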
How to cite: Sezen, İ., Unal, A., and Deniz, A.: Anomaly Detection by STL Decomposition and Extended Isolation Forest on Environmental Univariate Time Series, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18471, https://doi.org/10.5194/egusphere-egu2020-18471, 2020.
EGU2020-3822 | Displays | ITS4.9/ESSI2.17
Spatiotemporal distribution of major pollutants and their health impacts in Hubei Province from 2015 to 2018 based on machine learning to improve LUR
Xiao Feng
Air pollution poses a serious threat to human health, and a large number of studies have shown that certain diseases are closely related to air pollution. Understanding the spatiotemporal distribution of air pollutants and their health effects is of great significance for pollution prevention. This study takes Hubei Province, China as an example. It integrates measured ground air quality data, natural environment data, and socioeconomic data, and uses machine learning to improve the land use regression (LUR) model to simulate the spatial distribution of PM2.5 and O3 concentrations in the study area from 2015 to 2018. The combined pollutant concentration data and population raster data were used to assess deaths from specific diseases (stroke, ischemic heart disease, lung cancer) caused by air pollutants. The results show that high concentrations of pollutants are concentrated in the more economically developed eastern regions of Hubei Province, while the economically less developed western regions have good air quality. In addition, the distribution of deaths caused by exposure to air pollution is similar to that of the pollutants, being higher in the eastern part of Hubei Province. However, the total number of deaths in the province is decreasing year by year, indicating that environmental governance policies have alleviated the threat of air pollution to human health to some extent. Hubei Province should combine actual conditions and the spatial-temporal distribution characteristics of pollutants to formulate appropriate environmental protection measures.
How to cite: Feng, X.: Spatiotemporal distribution of major pollutants and their health impacts in Hubei Province from 2015 to 2018 based on machine learning to improve LUR, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3822, https://doi.org/10.5194/egusphere-egu2020-3822, 2020.
EGU2020-19933 | Displays | ITS4.9/ESSI2.17
Data Fusion on the CANDELA Cloud Platform
Wei Yao, Octavian Dumitru, Jose Lorenzo, and Mihai Datcu
This abstract describes the Data Fusion tool of the Horizon 2020 CANDELA project, in which Sentinel-1 (synthetic aperture radar) and Sentinel-2 (multispectral) satellite images are fused at feature level. The fusion is performed by extracting features from each type of image; these features are then combined in a new block within the Data Model Generation sub-module of the Data Fusion system.
The tool has already been integrated with the CANDELA cloud platform: its Data Model component on the platform acts as the backend, and the user interaction component on the local user machine as the frontend. There are four main sub-modules: Data Model Generation for Data Fusion (DMG-DF), DataBase Management System (DBMS), Image Search and Semantic Annotation (ISSA), and multi-knowledge and Query (QE). The DMG-DF and DBMS sub-modules have been dockerized and deployed on the CANDELA platform. The ISSA and QE sub-modules require user inputs through their interactive interfaces; they can be started as a standard Graphical User Interface (GUI) tool that is linked directly to the database on the platform.
Before using the Data Fusion tool, users have to prepare the already co-registered Sentinel-1 and Sentinel-2 products as inputs. The S1tiling service provided on the platform is able to cut out the overlapping Sentinel-1 area based on Sentinel-2 tile IDs.
The pipeline of the Data Fusion tool starts with the DMG-DF process on the platform, and the data are transferred via the Internet; local end users can then perform semantic annotations, which are ingested back into the database on the platform.
The Data Fusion process consists of three steps:
- On the platform, launch a Jupyter notebook for Python, and start the Data Model Generation for Data Fusion to process the prepared Sentinel-1 and Sentinel-2 products which cover the same area;
- On the local user machine, by clicking the Query button of the GUI, users can access the remote database, perform image searches and queries, and carry out semantic annotations by loading quick-look images of the processed Sentinel-1 and Sentinel-2 products via the Internet. Feature fusion and image quick-look pairing are performed at runtime; the fused features and paired quick-looks help obtain better semantic annotations. Clicking the ingestion button then ingests the annotations into the database on the platform;
- On the platform, launch a Jupyter notebook for Python, where the annotations and the processed product metadata can be searched and queried.
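The feature-level fusion at the core of this pipeline can be sketched as follows. This is a hedged illustration with synthetic feature vectors: the CANDELA feature extractors themselves are not reproduced, and the feature dimensions are invented.

```python
import numpy as np

# Hypothetical sketch: features extracted separately from co-registered
# Sentinel-1 (SAR) and Sentinel-2 (multispectral) patches are concatenated
# into one fused descriptor per patch.
rng = np.random.default_rng(1)
n_patches = 100
s1_features = rng.random((n_patches, 16))   # e.g. SAR texture descriptors
s2_features = rng.random((n_patches, 32))   # e.g. spectral statistics

# Normalize each feature set before fusing so neither modality dominates.
def zscore(f):
    return (f - f.mean(axis=0)) / f.std(axis=0)

fused = np.hstack([zscore(s1_features), zscore(s2_features)])
print(fused.shape)  # one fused descriptor per patch
```

Downstream, the fused descriptors feed the semantic annotation and classification steps described above.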
Our preliminary validation is based on visual analysis, comparing the obtained classification maps with already available CORINE land cover maps. In general, our fused results yield more complete classification maps containing more classes.
How to cite: Yao, W., Dumitru, O., Lorenzo, J., and Datcu, M.: Data Fusion on the CANDELA Cloud Platform, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19933, https://doi.org/10.5194/egusphere-egu2020-19933, 2020.
EGU2020-10768 | Displays | ITS4.9/ESSI2.17
A Deep-Learning Parallel Processing Agglomerative Algorithm for the Identification of Distinct Seismic Regions in the Southern Hellenic Seismic Arc
Alexandra Moshou, Antonios Konstantaras, Emmanouil Markoulakis, Panagiotis Argyrakis, and Emmanouil Maravelakis
The identification of distinct seismic regions and the extraction of their features in relation to known underground fault mappings could provide valuable information towards understanding the seismic clustering phenomenon, i.e. whether an earthquake occurring in a particular area can trigger another earthquake in its vicinity. This work addresses that question and unveils the potential presence and extent of distinct seismic regions in the area of the Southern Hellenic Seismic Arc. To achieve this, a spatio-temporal clustering algorithm has been developed based on expert knowledge of the spatial and temporal influence of an earthquake on its nearby vicinity, using seismic data provided by the Geodynamics Institute of Athens; it is further supported by geological observations of underground fault mappings beneath the candidate seismic regions. This is made possible by advances in deep learning and in graphics processing unit technology with parallel processing architectures, which comprise blocks of multiple cores running parallel threads and provide the hardware foundation for accelerated analysis of seismic big data. Seismic data are normally stored in massive, continuously expanding matrices, as seismic coverage of wide areas thickens with denser recording networks and as decades of data are stacked together. This work employs that technology to develop and implement a CUDA parallel processing agglomerative spatio-temporal clustering algorithm that allows expert knowledge to be imported for the investigation of distinct seismic regions in the area under study. The overall spatio-temporal clustering results are in accordance with empirical observations reported in the literature throughout the vicinity of the Hellenic Seismic Arc.
Indexing terms: parallel processing, heterogeneous parallel programming, CUDA, distinct seismic regions, seismic clustering, spatio-temporal clustering
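The expert-rule clustering idea can be sketched as below. This is not the authors' CUDA implementation: the influence radius, time window, and event coordinates are illustrative assumptions, and events linked within both thresholds are grouped into connected components standing in for distinct seismic regions.

```python
import numpy as np

# Hedged sketch: two events are linked when they fall within an assumed
# influence radius (km) and time window (days); connected components of the
# resulting graph form candidate distinct seismic regions.
def cluster_events(lat, lon, t_days, radius_km=50.0, window_days=30.0):
    n = len(lat)
    labels = np.arange(n)  # union-find parent array

    def find(i):
        while labels[i] != i:
            labels[i] = labels[labels[i]]  # path compression
            i = labels[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            # Rough planar distance in km; adequate for a regional sketch.
            d = 111.0 * np.hypot(lat[i] - lat[j],
                                 (lon[i] - lon[j]) * np.cos(np.radians(lat[i])))
            if d <= radius_km and abs(t_days[i] - t_days[j]) <= window_days:
                labels[find(i)] = find(j)  # merge the two clusters

    return np.array([find(i) for i in range(n)])

# Two well-separated synthetic event groups.
lat = np.array([35.0, 35.1, 35.05, 36.5, 36.55])
lon = np.array([25.0, 25.1, 25.05, 27.0, 27.05])
t = np.array([0.0, 5.0, 10.0, 0.0, 3.0])
labels = cluster_events(lat, lon, t)
print(len(set(labels)))  # number of distinct regions found
```

The pairwise loop is the part that the CUDA implementation would parallelize across GPU threads.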
How to cite: Moshou, A., Konstantaras, A., Markoulakis, E., Argyrakis, P., and Maravelakis, E.: A Deep-Learning Parallel Processing Agglomerative Algorithm for the Identification of Distinct Seismic Regions in the Southern Hellenic Seismic Arc, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10768, https://doi.org/10.5194/egusphere-egu2020-10768, 2020.
EGU2020-6771 | Displays | ITS4.9/ESSI2.17
Analysis and Construction of Geodetic Data Classification Standard
Yongshang Wang
A new geodetic technology system characterized by high precision, real-time operation and widespread adoption has gradually formed in the wave of the new technological revolution. Taking advantage of modern information technology, the new system offers technology sharing and data sharing, in which geodetic data, positioning and other geodetic applications can meet requirements for real-time, publicly shared and interactive services. Geodetic data standards are the cornerstone of geodetic informationization; they scientifically describe geodetic data processing, management and service, as well as the theory, methods and procedures used for implementation. As geodetic informationization develops, the breadth and depth of data standardization will far exceed the level of traditional standardization. With the advancement of measurement technology, there are fewer technical constraints in the relatively simple operation of instruments, but more complicated data structures to be analyzed and more urgent requirements of socialized services to be met. The focus of geodetic standardization will therefore shift from operational standards to data standards. In this paper, the content, characteristics, classification principles and methods of geodetic data are studied with respect to the new characteristics of modern geodetic informationization.
How to cite: Wang, Y.: Analysis and Construction of Geodetic Data Classification Standard, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6771, https://doi.org/10.5194/egusphere-egu2020-6771, 2020.
EGU2020-21208 | Displays | ITS4.9/ESSI2.17
Long-term trends in ocean chlorophyll: update from a Bayesian hierarchical space-time model
Claudie Beaulieu, Matthew Hammond, Stephanie Henson, and Sujit Sahu
Assessing ongoing changes in marine primary productivity is essential to determine the impacts of climate change on marine ecosystems and fisheries. Satellite ocean color sensors provide detailed coverage of ocean chlorophyll in space and time, now with a combined record length of just over 20 years. Detecting climate change impacts is hindered by the shortness of the record and the long timescale of memory within the ocean such that even the sign of change in ocean chlorophyll is still inconclusive from time-series analysis of satellite data. Here we use a Bayesian hierarchical space-time model to estimate long-term trends in ocean chlorophyll. The main advantage of this approach comes from the principle of “borrowing strength” from neighboring grid cells in a given region to improve overall detection. We use coupled model simulations from the CMIP5 experiment to form priors to provide a “first guess” on observational trend estimates and their uncertainty that we then update using satellite observations. We compare the results with estimates obtained with the commonly used vague prior, reflecting the case where no independent knowledge is available. A global average net positive chlorophyll trend is found, with stronger regional trends that are typically positive in high and mid latitudes, and negative at low latitudes outside the Atlantic. The Bayesian hierarchical model used here provides a framework for integrating different sources of data for detecting trends and estimating their uncertainty in studies of global change.
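The prior-updating idea can be sketched as a single-site conjugate normal update, which is a drastic simplification of the full hierarchical space-time model; the prior values and the synthetic chlorophyll series are illustrative assumptions.

```python
import numpy as np

# Assumed model-derived prior on the chlorophyll trend (mg m-3 per year).
prior_mean, prior_sd = 0.002, 0.004

# Synthetic 20-year annual chlorophyll series with a small positive trend.
rng = np.random.default_rng(2)
years = np.arange(20)
chl = 0.25 + 0.003 * years + rng.normal(0, 0.01, years.size)

# Observational trend and its standard error from ordinary least squares.
A = np.column_stack([np.ones_like(years), years])
coef, res, *_ = np.linalg.lstsq(A, chl, rcond=None)
obs_trend = coef[1]
sigma2 = res[0] / (years.size - 2)
obs_se = np.sqrt(sigma2 / np.sum((years - years.mean()) ** 2))

# Precision-weighted posterior: shrinks the noisy observation toward the prior.
w_prior, w_obs = 1 / prior_sd**2, 1 / obs_se**2
post_mean = (w_prior * prior_mean + w_obs * obs_trend) / (w_prior + w_obs)
post_sd = np.sqrt(1 / (w_prior + w_obs))
print(post_mean > 0, post_sd < prior_sd)
```

In the full model, the "borrowing strength" comes from sharing information across neighboring grid cells as well, not only from the prior.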
How to cite: Beaulieu, C., Hammond, M., Henson, S., and Sahu, S.: Long-term trends in ocean chlorophyll: update from a Bayesian hierarchical space-time model, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21208, https://doi.org/10.5194/egusphere-egu2020-21208, 2020.
EGU2020-9186 | Displays | ITS4.9/ESSI2.17 | Highlight
Spatio-Temporal Modeling of Wind Speed Using EOF and Machine Learning
Fabian Guignard, Federico Amato, Sylvain Robert, and Mikhail Kanevski
Spatio-temporal modelling of wind speed is an important issue in applied research fields such as renewable energy and risk assessment. Due to its turbulent nature and very high variability, wind speed interpolation is a challenging task. Being universal modeling tools, Machine Learning (ML) algorithms are well suited to detect and model non-linear environmental phenomena such as wind.
The present research proposes a novel and general methodology for spatio-temporal interpolation, with an application to hourly wind speed in Switzerland. The methodology is organized as follows. First, the dataset is decomposed through Empirical Orthogonal Functions (EOFs) into a temporal basis and spatially dependent coefficients. The EOFs constitute an orthogonal basis of the spatio-temporal signal from which the original wind field can be reconstructed. Subsequently, in order to reconstruct the signal at spatial locations where measurements are not available, the spatial coefficients resulting from the decomposition are interpolated. To this aim, several ML algorithms were used and compared, including k-Nearest Neighbors, Random Forest, Support Vector Machine, General Regression Neural Networks and Extreme Learning Machine. Finally, the wind field is reconstructed with the help of the interpolated coefficients.
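The EOF decomposition and reconstruction step can be sketched with a plain SVD on synthetic data; the ML interpolation of the spatial coefficients to unmeasured locations is omitted here, and the field itself is invented for the example.

```python
import numpy as np

# Synthetic (time x station) wind matrix: a rank-1 signal plus noise.
rng = np.random.default_rng(3)
n_t, n_s = 500, 30
t = np.linspace(0, 10, n_t)
station_coef = rng.random(n_s)
field = np.outer(np.sin(t), station_coef) + 0.01 * rng.normal(size=(n_t, n_s))

# SVD of the anomalies: columns of U are temporal basis functions, and the
# rows of Vt (scaled by s) carry the spatially dependent coefficients.
U, s, Vt = np.linalg.svd(field - field.mean(axis=0), full_matrices=False)
k = 3  # truncate to the leading EOFs
recon = field.mean(axis=0) + (U[:, :k] * s[:k]) @ Vt[:k, :]

err = np.linalg.norm(field - recon) / np.linalg.norm(field)
print(round(err, 3))  # relative reconstruction error of the truncated basis
```

In the full methodology, the columns of Vt would be interpolated to new locations with an ML regressor and recombined with U to map the field there.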
A case study on real data is presented. The data consist of two years of wind speed measurements at hourly frequency, collected by MeteoSwiss at several hundred stations across Switzerland's complex orography. After cleaning and handling of missing values, a careful exploratory data analysis was carried out, followed by the application of the proposed methodology. The model is validated on an independent test set of stations. The outcome of the case study is a time series of hourly wind field maps at 250 m spatial resolution, which is highly relevant for renewable energy potential assessment.
In conclusion, the study introduces a new way to interpolate irregular spatio-temporal datasets. Further developments of the methodology could investigate alternative bases such as Fourier and wavelets.
How to cite: Guignard, F., Amato, F., Robert, S., and Kanevski, M.: Spatio-Temporal Modeling of Wind Speed Using EOF and Machine Learning, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9186, https://doi.org/10.5194/egusphere-egu2020-9186, 2020.
EGU2020-9206 | Displays | ITS4.9/ESSI2.17 | Highlight
Spatio-temporal global patterns of 70 years of daily temperature using Fisher-Shannon complexity measures
Federico Amato, Fabian Guignard, and Mikhail Kanevski
Global climate has been the focus of an increasing number of studies over the last decades. The ratification of the Paris Agreement requires undertaking the necessary actions to limit the increase in global average temperature to below 1.5 °C, in order to reduce the risks and impacts of climate change.
Despite the importance of its spatial and temporal distribution, warming has often been investigated only in terms of global and hemispheric means. Moreover, although climate is known to be characterised by strong nonlinearity and chaotic behaviour, most studies in climate science adopt statistical methods valid only for stationary or linear systems. Nevertheless, it has already been shown that warming trends are characterised by strong nonlinearities, with an acceleration in the increase of temperatures since 1980.
In this work, we investigate the complex nature of global temperature trends. We study the maximum temperature at two meters above ground using the NCEP CDAS1 daily reanalysis data, with a spatial resolution of 2.5° by 2.5° and covering the period from 1 January 1948 to 30 November 2018. For each spatial location, we characterize the corresponding temperature time series using methods from Information Theory. Specifically, we analysed the temperature by computing the Fisher Information Measure [1] (FIM) and the Shannon Entropy Power [2] (SEP) in a temporal sliding window, which allows the temporal evolution of the two parameters to be followed. We find a significant change in the spatial patterns of the dynamic behaviour of temperatures starting from the early eighties. Specifically, two different patterns are recognizable. In the period from 1948 to the early eighties, the latitudes higher than 60°N and lower than 60°S show high levels of SEP and low levels of FIM. The situation is completely reversed starting from the 1980s, and faster for the latitudes higher than 60°N, so that tropical and temperate zones are now characterized by high levels of entropy. The strongest growth of SEP is measured in the northern mid-latitudes; these regions are also known to have experienced higher warming trends. Finally, a drastic difference between oceans and land surfaces is detectable, with the former generally affected by significant increases of SEP since the eighties.
[1] Fisher, R. A Theory of statistical estimation. Math. Proc. Camb. Philos. Soc.22, 700–725, DOI: 10.1017/S0305004100009580 (1925).
[2] Shannon, C. E. A mathematical theory of communication. Bell Syst. Tech. J.27, 379–423, DOI: 10.1002/j.1538-7305.1948.tb01338.x (1948).
How to cite: Amato, F., Guignard, F., and Kanevski, M.: Spatio-temporal global patterns of 70 years of daily temperature using Fisher-Shannon complexity measures, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9206, https://doi.org/10.5194/egusphere-egu2020-9206, 2020.
Global climate has been the focus of an increasing number of studies over the last decades. The ratification of the Paris Agreement requires undertaking the actions necessary to limit the increase in global average temperature to below 1.5°C in order to reduce the risks and impacts of climate change.
Despite the importance of its spatial and temporal distribution, warming has often been investigated only in terms of global and hemispheric means. Moreover, although it is known that climate is characterised by strong nonlinearity and chaotic behaviour, most of the studies in climate science adopt statistical methods valid only for stationary or linear systems. Nevertheless, it has already been shown that warming trends are characterised by strong nonlinearities, with an acceleration in the increase of temperatures since 1980.
In this work, we investigate the complex nature of global temperature trends. We study the maximum temperature at two meters above ground using the NCEP CDAS1 daily reanalysis data, with a spatial resolution of 2.5° by 2.5° and covering the time period from 1 January 1948 to 30 November 2018. For each spatial location, we characterize the corresponding temperature time series using methods from Information Theory. Specifically, we analysed the temperature by computing the Fisher Information Measure [1] (FIM) and the Shannon Entropy Power [2] (SEP) in a temporal sliding window, which allows us to follow the temporal evolution of the two parameters. We find a significant change in the spatial patterns of the dynamic behaviour of temperatures starting from the early eighties. Specifically, two different patterns are recognizable. In the period from 1948 to the early eighties, the latitudes higher than 60°N and lower than 60°S show high levels of SEP and low levels of FIM. The situation completely reverses starting from the 1980s, and more rapidly for the latitudes higher than 60°N, so that tropical and temperate zones are now characterized by high levels of entropy. The strongest growth of SEP is measured in the northern mid-latitudes. These regions are also known to have been characterized by higher warming trends. Finally, a drastic difference between oceans and land surfaces is detectable, with the former generally showing significant increases of SEP since the eighties.
[1] Fisher, R. A. Theory of statistical estimation. Math. Proc. Camb. Philos. Soc. 22, 700–725, DOI: 10.1017/S0305004100009580 (1925).
[2] Shannon, C. E. A mathematical theory of communication. Bell Syst. Tech. J. 27, 379–423, DOI: 10.1002/j.1538-7305.1948.tb01338.x (1948).
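The sliding-window Fisher-Shannon analysis described above can be sketched in a few lines. The following is our own minimal Python illustration (not the authors' code): it estimates FIM and SEP from a kernel density estimate of each window; function names, the KDE choice, and the grid settings are our assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

def fisher_shannon(x, grid_size=512):
    """Estimate the Fisher Information Measure (FIM) and the Shannon
    Entropy Power (SEP) of a 1-D NumPy array via a Gaussian KDE."""
    kde = gaussian_kde(x)
    pad = 3.0 * x.std()
    grid = np.linspace(x.min() - pad, x.max() + pad, grid_size)
    f = kde(grid)
    dx = grid[1] - grid[0]
    # SEP: N = exp(2H) / (2*pi*e), with differential entropy H = -∫ f ln f dx
    H = -np.sum(f * np.log(f + 1e-300)) * dx
    sep = np.exp(2.0 * H) / (2.0 * np.pi * np.e)
    # FIM: I = ∫ (f')^2 / f dx, discretised on the evaluation grid
    fim = np.sum(np.gradient(f, dx) ** 2 / (f + 1e-300)) * dx
    return fim, sep

def sliding_fisher_shannon(series, window, step):
    # Evaluate both measures in a sliding window to follow their evolution
    return [fisher_shannon(series[i:i + window])
            for i in range(0, len(series) - window + 1, step)]
```

For a Gaussian sample the two measures are linked (SEP ≈ variance, FIM ≈ 1/variance, with FIM·SEP ≥ 1 and equality in the Gaussian case), which gives a quick sanity check for the estimator.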
How to cite: Amato, F., Guignard, F., and Kanevski, M.: Spatio-temporal global patterns of 70 years of daily temperature using Fisher-Shannon complexity measures, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9206, https://doi.org/10.5194/egusphere-egu2020-9206, 2020.
ITS5.1/CL3.6 – Emission pathways, carbon budgets, and climate-carbon response: governing mechanisms, limitations, and implications for policymakers
EGU2020-2194 | Displays | ITS5.1/CL3.6
Non-CO2 forcing changes will likely decrease the remaining carbon budget for 1.5°C
Nadine Mengis and H. Damon Matthews
Estimates of the 1.5°C carbon budget vary widely among recent studies. One key contribution to this range is uncertainty in the non-CO2 climate forcing scenario. Based on a partitioning of historical non-CO2 forcing, we show that there is currently a net negative non-CO2 forcing from fossil fuel combustion (FFC), mainly due to the co-emission of aerosols, and a net positive non-CO2 climate forcing from land-use change (LUC) and agricultural activities. We then performed a set of future simulations in which we prescribed a 1.5°C temperature stabilization trajectory and diagnosed the resulting 1.5°C carbon budgets. Using the results of our historical partitioning, we prescribed changing non-CO2 forcing scenarios that are consistent with our model’s simulated decrease in FFC CO2 emissions. We compared the diagnosed carbon budgets from these idealized scenarios to those resulting from the default RCP scenario non-CO2 forcing, as well as from a scenario in which we assumed proportionality between future CO2 and non-CO2 forcing. We find a large range of carbon budget estimates across scenarios, with the largest budget emerging from the scenario with assumed proportionality of CO2 and non-CO2 forcing. Furthermore, our adjusted-RCP scenarios, in which the non-CO2 forcing is consistent with model-diagnosed FFC CO2 emissions, produced carbon budgets that are smaller than the corresponding default RCP scenarios. Our results suggest that ambitious mitigation scenarios will likely be characterized by an increasing contribution of non-CO2 forcing, and that an assumption of continued proportionality between CO2 and non-CO2 forcing would lead to an overestimate of the remaining carbon budget consistent with low temperature targets. Maintaining such proportionality under ambitious fossil fuel mitigation would require mitigation of non-CO2 emissions from agriculture and other non-FFC sources at a rate that is substantially faster than is found in the standard RCP scenarios.
How to cite: Mengis, N. and Matthews, H. D.: Non-CO2 forcing changes will likely decrease the remaining carbon budget for 1.5°C, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2194, https://doi.org/10.5194/egusphere-egu2020-2194, 2020.
EGU2020-11278 * | Displays | ITS5.1/CL3.6 | Highlight
A new framework for understanding and quantifying uncertainties in the remaining carbon budget
H. Damon Matthews, Katarzyna Tokarska, Joeri Rogelj, Piers Forster, Karsten Haustein, Christopher Smith, Andrew MacDougall, Nadine Mengis, Sebastian Sippel, and Reto Knutti
The remaining carbon budget quantifies the allowable future CO2 emissions to keep global mean warming below a desired level. Carbon budget estimates are subject to uncertainty in the Transient Climate Response to Cumulative CO2 Emissions (TCRE), which measures the warming resulting from a given total amount of CO2 emitted. Moreover, other sources of uncertainty linked to non-CO2 emissions have been shown to also strongly affect estimates of the remaining carbon budget. Here we present a new framework that estimates the TCRE using geophysical constraints derived from observations, and integrates the effect of geophysical and socioeconomic pathway uncertainties on the distribution of the remaining carbon budget. We estimate a median TCRE of 0.40 °C with a likely (17-83%) range of 0.3 to 0.5 °C per 1000 GtCO2 emitted. Our 1.5 °C remaining carbon budget has a median value of 710 GtCO2 from 2020 onwards, with a range of 470 to 960 GtCO2 (for a 67% to 33% chance of not exceeding the target). Uncertainty in the amount of current warming from non-CO2 forcing is the dominant geophysical contributor to the spread in both the TCRE and remaining carbon budget estimates. The remaining carbon budget distribution is also strongly affected by current and future mitigation decisions, where the range of non-CO2 forcing across scenarios has the potential to increase or decrease the median 1.5 °C remaining carbon budget by 740 GtCO2.
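The core arithmetic of a TCRE-based budget estimate can be illustrated with a short Monte Carlo sketch. This is our own toy reconstruction, not the authors' framework: the log-normal shape for TCRE, the assumed 1.2 °C of present-day warming, and the omission of non-CO2 and zero-emissions-commitment terms are all illustrative simplifications; only the median TCRE of 0.40 °C and the 0.3-0.5 °C likely range come from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# TCRE sampled log-normally, matched to the reported median (0.40 °C per
# 1000 GtCO2) and likely 17-83% range (0.3-0.5 °C); the distribution shape
# is our assumption.
mu = np.log(0.40)
sigma = np.log(0.50 / 0.40) / 0.954      # 0.954 = z-score of the 83rd percentile
tcre = rng.lognormal(mu, sigma, n)       # °C per 1000 GtCO2

warming_to_date = 1.2                    # assumed present-day warming, °C (illustrative)
remaining_warming = 1.5 - warming_to_date
budget = remaining_warming / tcre * 1000.0   # remaining budget, GtCO2

p33, p50, p67 = np.percentile(budget, [33, 50, 67])
```

Even this crude inversion reproduces the right order of magnitude (a median of roughly 750 GtCO2 here versus the reported 710 GtCO2); the full framework additionally propagates non-CO2 forcing and socioeconomic pathway uncertainty, which this sketch deliberately omits.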
How to cite: Matthews, H. D., Tokarska, K., Rogelj, J., Forster, P., Haustein, K., Smith, C., MacDougall, A., Mengis, N., Sippel, S., and Knutti, R.: A new framework for understanding and quantifying uncertainties in the remaining carbon budget, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11278, https://doi.org/10.5194/egusphere-egu2020-11278, 2020.
EGU2020-2200 | Displays | ITS5.1/CL3.6
Quantifying the probability distribution function of the Transient Climate Response to Cumulative CO2 Emissions
Lynsay Spafford and Andrew MacDougall
The Transient Climate Response to Cumulative CO2 Emissions (TCRE) is the proportionality between global temperature change and cumulative CO2 emissions. The TCRE implies a finite quantity of CO2 emissions, or carbon budget, consistent with a given temperature change limit. The uncertainty of the TCRE is often assumed to be normally distributed, but this assumption has yet to be validated. We calculated the TCRE using a zero-dimensional ocean diffusive model and a Monte-Carlo error propagation (n = 10 000 000), randomly drawing from probability density functions of the climate feedback parameter, the land-borne fraction of carbon, effective ocean diffusivity, radiative forcing from an e-fold increase in atmospheric CO2 concentration, and the ratio of sea to global surface temperature change. The calculated TCRE has a positively skewed distribution, ranging from 1.1-2.9 K EgC-1 (5-95% confidence), with mean and median values of 1.9 K EgC-1 and 1.8 K EgC-1, respectively. The calculated distribution of the TCRE is well described by a log-normal distribution. The CO2-only carbon budget compatible with 2°C warming is 1 100 PgC, ranging from 700-1 800 PgC (5-95% confidence), estimated using a simplified model of ocean dynamics. Climate sensitivity (climate feedback) is the most influential Earth system parameter on the TCRE, followed by the land-borne fraction of carbon, radiative forcing from an e-fold increase in CO2, effective ocean diffusivity, and the ratio of sea to global surface temperature change. While the uncertainty of the TCRE is considerable, the use of a log-normal distribution may improve estimations of the TCRE and associated carbon budgets.
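As a quick consistency check, the reported summary statistics already pin down a log-normal shape. The short sketch below (our own reconstruction, not the authors' code) recovers log-normal parameters from the stated median and mean and then reproduces the published 5-95% range to within rounding:

```python
import math

# Log-normal reconstructed from the reported median (1.8) and mean
# (1.9 K EgC^-1): median = e^mu, mean = e^(mu + sigma^2/2).
median, mean = 1.8, 1.9
mu = math.log(median)
sigma = math.sqrt(2.0 * math.log(mean / median))

z95 = 1.645  # z-score of the 95th percentile
lo = math.exp(mu - z95 * sigma)
hi = math.exp(mu + z95 * sigma)
print(f"5-95% range: {lo:.2f}-{hi:.2f} K EgC^-1")
# prints: 5-95% range: 1.05-3.09 K EgC^-1
```

The implied 1.05-3.09 K EgC-1 interval matches the reported 1.1-2.9 K EgC-1 range closely, supporting the log-normal description of the TCRE distribution.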
How to cite: Spafford, L. and MacDougall, A.: Quantifying the probability distribution function of the Transient Climate Response to Cumulative CO2 Emissions, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2200, https://doi.org/10.5194/egusphere-egu2020-2200, 2020.
EGU2020-6124 | Displays | ITS5.1/CL3.6
Carbon-concentration and carbon-climate feedbacks in CMIP6 models, and their comparison to CMIP5 models
Vivek Arora, Anna Katavouta, Richard Williams, Chris Jones, Victor Brovkin, and Pierre Friedlingstein and the rest of C4MIP carbon feedbacks analysis team
Results from the fully-, biogeochemically-, and radiatively-coupled simulations in which CO2 increases at a rate of 1% per year (1pctCO2) from its pre-industrial value are analyzed to quantify the magnitude of two feedback parameters which characterize the coupled carbon-climate system. These feedback parameters quantify the response of ocean and terrestrial carbon pools to changes in atmospheric CO2 concentration and the resulting change in global climate. The results are based on eight comprehensive Earth system models from the fifth Coupled Model Intercomparison Project (CMIP5) and eleven models from the sixth CMIP (CMIP6). The comparison of model results from the two CMIP phases shows that, for both land and ocean, the model mean values of the feedback parameters and their multi-model spread have not changed significantly across the two CMIP phases. The absolute values of feedback parameters are lower for land in models that include a representation of the nitrogen cycle. The sensitivity of the feedback parameters to the three different ways in which they may be calculated is shown and, consistent with existing studies, the most relevant definition is that calculated using results from the fully- and biogeochemically-coupled configurations. Based on these two simulations, simplified expressions for the feedback parameters are obtained when the small temperature change in the biogeochemically-coupled simulation is ignored. Decomposition of the terms of these simplified expressions for the feedback parameters allows identification of the reasons for differing responses among ocean and land carbon cycle models.
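The simplified expressions mentioned above can be written down directly. The sketch below uses our own notation (the abstract does not give symbols): with the small temperature change of the biogeochemically-coupled run neglected, the carbon-concentration feedback β comes from the biogeochemically-coupled run alone, and the carbon-climate feedback γ from the difference between the fully- and biogeochemically-coupled runs.

```python
def feedback_parameters(dC_full, dT_full, dC_bgc, dCO2):
    """Simplified C4MIP-style feedback parameters (our notation).

    dC_full : carbon pool change in the fully coupled run (PgC)
    dT_full : temperature change in the fully coupled run (K)
    dC_bgc  : carbon pool change in the biogeochemically coupled run (PgC)
    dCO2    : atmospheric CO2 change (ppm)
    """
    beta = dC_bgc / dCO2                  # carbon-concentration feedback, PgC/ppm
    gamma = (dC_full - dC_bgc) / dT_full  # carbon-climate feedback, PgC/K
    return beta, gamma
```

With illustrative inputs (e.g. a land pool gaining 500 PgC without warming but only 400 PgC with 4 K of warming over a 560 ppm CO2 rise), β is positive and γ negative, the sign pattern typical of both land and ocean in these models.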
How to cite: Arora, V., Katavouta, A., Williams, R., Jones, C., Brovkin, V., and Friedlingstein, P. and the rest of C4MIP carbon feedbacks analysis team: Carbon-concentration and carbon-climate feedbacks in CMIP6 models, and their comparison to CMIP5 models, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6124, https://doi.org/10.5194/egusphere-egu2020-6124, 2020.
EGU2020-17643 | Displays | ITS5.1/CL3.6
Strength and reversibility of the ocean carbon sink under negative emissions
Nadine Goris, Jerry Tjiputra, Ingjald Pilskog, and Jörg Schwinger
Climate change is progressing fast and net negative emissions will most likely be needed to achieve ambitious climate targets. To determine the amount of negative emissions needed, it is key to identify the reversibility of our carbon sinks, i.e. to establish how much of their strength is lost during declining emissions. Specifically, the strength of the ocean carbon sink is likely to decline with ongoing rising emissions, and subsequent negative emissions might lead to the ocean reverting into a carbon source.
In light of these challenges, we analyze the strength and reversibility of the ocean carbon sink with the Norwegian Earth System Model under an idealized scenario, the 'Climate and carbon reversibility experiment' of CDRMIP. Here, a strong atmospheric CO2 increase of 1% per year is assumed until CO2 concentrations have quadrupled, followed by a decrease of 1% per year until pre-industrial concentrations are restored. Our model results indicate that the oceanic CO2-uptake is not able to keep pace with the atmospheric rise and descent, showing only a slow increase and then a sudden decrease with the onset of negative emissions. However, the seasonal envelope illustrates that this is not true for all months: the oceanic CO2-uptake during austral winter months shows both a strong uptake and high reversibility.
A regional analysis of seasonal characteristics shows that a strong and reversible CO2-uptake throughout the experiment is only maintained by the biological pump in high latitudes during spring and summer (austral and boreal, respectively). For other months and latitudes, the oceanic CO2-uptake is weak or even turns into outgassing due to ongoing warming and subsequent sluggish cooling of sea surface temperature. In our model simulation, the inertia of sea surface temperature is the main cause for the irreversibility of the oceanic CO2-uptake. This result, however, is highly dependent on the amount of CO2 taken up during rising emissions. In a corresponding simulation without warming, our model’s ocean takes up more CO2 during rising emissions, leading to dissolved inorganic carbon being the main cause for the irreversibility of the oceanic CO2 sink.
Our study shows that seasonal mechanisms are of high importance when considering the strength of the ocean carbon sink under negative emissions. Regional monthly trajectories visualize different aspects of biological and physical mechanisms, which can be observed early on and help to verify strength and reversibility of the ocean carbon sink.
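The forcing pathway of the reversibility experiment described above is simple enough to write down. The sketch below is our own illustration: the pre-industrial concentration and the assumption that the descent exactly mirrors the ascent are ours; the CDRMIP protocol itself should be consulted for the precise specification.

```python
import math

def cdr_reversibility_co2(c0=284.7):
    """CO2 pathway (ppm) for a quadrupling-and-return experiment:
    +1% per year until 4x the (assumed) pre-industrial value c0,
    then ramped back down to c0; here the descent simply mirrors
    the ascent, which is one common reading of the protocol."""
    n_up = math.ceil(math.log(4.0) / math.log(1.01))  # ~140 years to 4xCO2
    ramp_up = [c0 * 1.01 ** t for t in range(n_up + 1)]
    return ramp_up + ramp_up[-2::-1]                  # mirror back to c0
```

The compound 1% growth means quadrupling takes about 140 years, so the full up-and-down trajectory spans roughly 280 years of forcing, the window over which the sink's strength and reversibility are diagnosed.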
How to cite: Goris, N., Tjiputra, J., Pilskog, I., and Schwinger, J.: Strength and reversibility of the ocean carbon sink under negative emissions, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17643, https://doi.org/10.5194/egusphere-egu2020-17643, 2020.
EGU2020-2807 | Displays | ITS5.1/CL3.6
Capacity of bioenergy with carbon capture and storage in China consistent with the global remaining carbon budget
Rong Wang and the BECCS group
Bioenergy with carbon capture and storage (BECCS) is one of the negative-emission technologies that must be applied if we are to achieve the 1.5 °C, or even the 2 °C, warming targets of the Paris Agreement. As a start, existing coal-fired power plants could be retrofitted to co-fire with biofuel from agricultural and forestry residues, but the potential and costs of BECCS are as yet unassessed. Here, we modelled an optimal county-scale network of BECCS in China, by considering: spatial information on biofuel feedstock; power-plant retrofitting to increase the use of biofuel; biofuel transport to power stations and CO2 transport to geological repositories for carbon storage; and BECCS life-cycle emissions. BECCS at marginal costs of $100 per tonne CO2-equivalent (t CO2-eq)-1 could abate net CO2-eq emissions by up to Gt yr-1, assuming that CO2 emitted by power plants could be captured at 90% efficiency and accounting for additional emissions of greenhouse gases from the production cycle of BECCS. Because of the huge stock of useable agricultural and forestry residues in China, this carbon price leverages 20 times more mitigation of CO2 emissions by BECCS in China than in western North America. To cap cumulative emissions over 2011-2030 from China’s power sector at 5% of the global remaining carbon budget for the 2 °C limit since 2011, BECCS would require marginal costs of $ (t CO2-eq) -1, or the equivalent of investing 0.45% of GDP to generate 1.22 PWh yr-1 of electricity by 2030; this would abate 35% more carbon emissions than the announced nationally determined contribution by China. These results clarify the economics of emission abatement by BECCS in China, suggesting that using the available domestic biofuel feedstock has the potential to make a great contribution to global carbon emission mitigation.
How to cite: Wang, R. and the BECCS group: Capacity of bioenergy with carbon capture and storage in China consistent with the global remaining carbon budget, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2807, https://doi.org/10.5194/egusphere-egu2020-2807, 2020.
EGU2020-19469 | Displays | ITS5.1/CL3.6
Emergent Constraints on Climate-Carbon Cycle Projections
Peter Cox
Earth System Models (ESMs) are designed to project changes in the climate-carbon cycle system over the coming centuries. These models agree that the climate will change significantly under feasible scenarios of future CO2 emissions. However, model projections still cover a wide range for any given scenario, which impedes progress on tackling climate change. Estimates of the Transient Climate Response to Emissions (TCRE), and therefore of remaining carbon budgets, are affected by uncertainties in the response of land and ocean carbon sinks to changes in climate and CO2, and also by continuing uncertainties in the sensitivity of climate to radiative forcing. Over the last 7 years, emergent constraints have been proposed on many of the key uncertainties. Emergent constraints use the full range of model behaviours to find relationships between measurable aspects of present and past climates and future climate projections. This presentation will summarise proposed emergent constraints of relevance to future climate-carbon cycle projections, and discuss the implications for the remaining carbon budgets for stabilisation at 1.5 K and 2 K.
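The mechanics of an emergent constraint can be sketched in a few lines: regress a projected quantity on a present-day observable across a model ensemble, then condition on the observed value. The example below is entirely synthetic (the ensemble, the observable, and the linear link are invented for illustration); it is not a reconstruction of any specific constraint discussed in the presentation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic model ensemble: x is a present-day observable, y the projected
# quantity, linked (by construction here) through an emergent relationship.
x_models = rng.normal(1.0, 0.3, 15)
y_models = 2.0 * x_models + rng.normal(0.0, 0.1, 15)

# Fit the across-ensemble relationship, then condition on an observation.
slope, intercept = np.polyfit(x_models, y_models, 1)
x_obs, x_obs_sd = 0.9, 0.05                  # hypothetical observed value
y_constrained = slope * x_obs + intercept     # constrained central estimate
y_sd = abs(slope) * x_obs_sd                  # first-order propagated spread
```

The constrained spread `y_sd` reflects only the observational uncertainty propagated through the fitted slope; a full treatment would also fold in the scatter of models about the regression line.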
How to cite: Cox, P.: Emergent Constraints on Climate-Carbon Cycle Projections, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19469, https://doi.org/10.5194/egusphere-egu2020-19469, 2020.
EGU2020-18182 | Displays | ITS5.1/CL3.6
Climate assessment of emissions scenarios for use in WG3 of the IPCC’s Sixth Assessment ReportMatthew Gidden, Zebedee Nicholls, Edward Byers, Gaurav Ganti, Jarmo Kikstra, Robin Lamboll, Malte Meinshausen, Keywan Riahi, and Joeri Rogelj
Consistent and comparable climate assessments of scenarios are critical within the context of IPCC assessment reports. Given the number of scenarios assessed by WG3, the assessment “pipeline” must be almost completely automated. Here, we present the application of a new assessment pipeline which combines state-of-the-art components into a single workflow in order to derive climate outcomes for integrated assessment model (IAM) scenarios assessed by WG3 of the IPCC. A consistent analysis ensures that WG3’s conclusions about the socioeconomic transformations required to maintain a safe climate are based on the best understanding of our planetary boundaries from WG1. For example, if WG1 determines that climate sensitivity is higher than previously considered, then WG3 could incorporate this insight by, for example, considering much smaller remaining carbon budgets for any given temperature target.
The scenario-climate assessment pipeline comprises three primary components. First, a harmonization algorithm that maintains critical model characteristics between harmonized and unharmonized scenarios [1] is employed to align emissions trajectories with a common and consistent historical dataset as used in CMIP6 [2]. Next, a scenario’s reported emissions trajectories are analyzed for the completeness of their species and sectoral coverage. A consistent set of 14 emissions species is expected, aligning with published work within ScenarioMIP and CMIP6 (see ref [2], Table 2). Should any component of this full set of emissions trajectories be absent for a given scenario, an algorithm (e.g., generalised quantile walk [3]) is employed to “back-fill” missing species at the native model regional resolution. Finally, full emissions scenarios are analyzed by an Earth System Model emulator, e.g., MAGICC [4].
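The harmonization step can be sketched in a few lines. This is a simplified offset-with-convergence scheme under assumed numbers, not the actual implementation of ref [1]: the model trajectory is shifted to match a common historical value in the harmonization year, and the offset decays linearly to zero by a convergence year.

```python
import numpy as np

# Minimal harmonization sketch (hypothetical values throughout):
# shift a reported trajectory so it matches a common historical value
# in the harmonization year, decaying the offset to zero by 2050.
years = np.arange(2015, 2101, 5)
model = np.linspace(40.0, 10.0, years.size)      # reported CO2, Gt yr-1
historical_2015 = 38.0                           # common historical dataset value
convergence_year = 2050

offset = historical_2015 - model[0]
# Linear decay of the offset; zero weight after the convergence year.
weight = np.clip((convergence_year - years) / (convergence_year - years[0]), 0.0, 1.0)
harmonized = model + offset * weight

print(harmonized[0], harmonized[-1])  # matches history in 2015, model by 2100
```

Ratio-based variants (multiplying by a decaying scale factor instead of adding an offset) follow the same pattern and are preferred when trajectories approach zero.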
In this presentation, we explore differences in climate assessments and estimated remaining carbon budgets across various components of the pipeline for available scenarios in the literature. We consider the impact of alternative choices, especially those made in prior assessments by the IPCC (AR5, SR15), including, for example, the historical emissions database used, the effect of harmonization and back-filling, as well as the version and setup of MAGICC used.
References
[1] Gidden, M.J., Fujimori, S., van den Berg, M., Klein, D., Smith, S.J., van Vuuren, D.P. and Riahi, K., 2018. A methodology and implementation of automated emissions harmonization for use in Integrated Assessment Models. Environmental Modelling & Software, 105, pp.187-200.
[2] Gidden, M. J., Riahi, K., Smith, S. J., Fujimori, S., Luderer, G., Kriegler, E., van Vuuren, D. P., van den Berg, M., Feng, L., Klein, D., Calvin, K., Doelman, J. C., Frank, S., Fricko, O., Harmsen, M., Hasegawa, T., Havlik, P., Hilaire, J., Hoesly, R., Horing, J., Popp, A., Stehfest, E., and Takahashi, K.: Global emissions pathways under different socioeconomic scenarios for use in CMIP6: a dataset of harmonized emissions trajectories through the end of the century, Geosci. Model Dev., 12, 1443-1475, https://doi.org/10.5194/gmd-12-1443-2019, 2019.
[3] Teske, S. et al., Achieving the Paris Climate Agreement Goals. Springer, 2019.
[4] Meinshausen, M., Raper, S.C. and Wigley, T.M., 2011. Emulating coupled atmosphere-ocean and carbon cycle models with a simpler model, MAGICC6–Part 1: Model description and calibration. Atmospheric Chemistry and Physics, 11(4), pp.1417-1456.
How to cite: Gidden, M., Nicholls, Z., Byers, E., Ganti, G., Kikstra, J., Lamboll, R., Meinshausen, M., Riahi, K., and Rogelj, J.: Climate assessment of emissions scenarios for use in WG3 of the IPCC’s Sixth Assessment Report, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18182, https://doi.org/10.5194/egusphere-egu2020-18182, 2020.
EGU2020-18575 | Displays | ITS5.1/CL3.6
Warmer climate projections in CMIP6: the role of changes in the greenhouse gas concentrations from CMIP5 to CMIP6Klaus Wyser, Erik Kjellström, Torben Koenigk, Helena Martins, and Ralf Döscher
Many modelling groups have contributed CMIP6 scenario experiments to the CMIP6 archive. The analysis of CMIP6 future projections has started, and first results indicate that CMIP6 projections are warmer than their counterparts from CMIP5. To some extent this is explained by the higher climate sensitivity of many of the new generation of climate models. However, not only have models been updated since CMIP5, but the forcings have also changed from RCPs to SSPs. The new SSPs have been designed to have the same instantaneous radiative forcing at the end of the 21st century as the corresponding RCPs. However, we find that in the EC-Earth3 model the effective radiative forcing differs substantially when the GHG concentrations from the SSP are replaced by those from the corresponding RCP with the same nameplate RF. We estimate that for the SSP5-8.5 and SSP2-4.5 scenarios, 50% or more of the stronger warming in CMIP6 than in CMIP5 for the EC-Earth model can be explained by changes in GHG concentrations. Other changes in the forcing datasets, such as aerosols, play only a minor role in the additional warming. The discrepancy between RCP and SSP forcing datasets needs to be accounted for when comparing CMIP5 and CMIP6 climate projections and should be properly conveyed to the climate impact, adaptation and mitigation communities.
How to cite: Wyser, K., Kjellström, E., Koenigk, T., Martins, H., and Döscher, R.: Warmer climate projections in CMIP6: the role of changes in the greenhouse gas concentrations from CMIP5 to CMIP6, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18575, https://doi.org/10.5194/egusphere-egu2020-18575, 2020.
EGU2020-5980 | Displays | ITS5.1/CL3.6
Estimates of carbon budgets consistent with global warming of 1.5-2°C from ensembles of simulations with the MIT Earth System Model of intermediate complexity: the role of non-CO2 GHGs and SO2 emissionsAndrei Sokolov, Jennifer Morris, and Sergey Paltsev
We present estimates of carbon budgets for different levels of surface air temperature (SAT) increase from multiple 400-member ensembles of simulations with the MIT Earth System Model of intermediate complexity (MESM). Ensembles were carried out using distributions of climate parameters that affect the climate-system response to external forcing; these distributions were obtained by comparing historical simulations with available observations.
First, to evaluate MESM performance, we ran two ensembles: one with MESM forced by a 1% per year increase in CO2 concentrations (with non-CO2 greenhouse gases (GHG) at pre-industrial levels) and the other with GHG concentrations from the RCP 8.5 scenario. Distributions of climate characteristics describing the model response to increasing CO2 concentrations (e.g. TCR and TCRE), as well as values of carbon budgets for exceeding different SAT levels, agree well with published estimates.
Then we ran a number of ensembles with MESM driven by emissions produced by the MIT Economic Projection and Policy Analysis (EPPA) Model. Our results show that under stringent mitigation policy concerning non-CO2 GHGs, the SAT increase can be kept below 2°C relative to pre-industrial with 66% probability through the end of the 21st century without negative CO2 emissions. The SAT increase can also be restricted to 1.5°C with 50% probability if such policy is implemented immediately. If GHG emissions follow the path implied by the Paris Agreement pledges through 2030, then it would require either an unrealistically sharp drop in non-CO2 GHG emissions or negative CO2 emissions to stay below 1.5°C. Keeping the temperature increase below a chosen value (1.5°C or 2°C) beyond 2100 will most likely require negative CO2 emissions in part due to difficulties in restricting agricultural methane emissions.
Further analysis shows that temperature change during the 21st century is also significantly affected by the assumption concerning decrease of SO2 emissions from energy-intensive industries. Implementation of technologies resulting in reduction of those emissions will decrease the probability of SAT staying below a given limit by about 7-12%. This will affect the time when negative CO2 emissions will become necessary to prevent temperature increase.
How to cite: Sokolov, A., Morris, J., and Paltsev, S.: Estimates of carbon budgets consistent with global warming of 1.5-2°C from ensembles of simulations with the MIT Earth System Model of intermediate complexity: the role of non-CO2 GHGs and SO2 emissions, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5980, https://doi.org/10.5194/egusphere-egu2020-5980, 2020.
EGU2020-2452 | Displays | ITS5.1/CL3.6
Quantifying non-CO2 contributions to remaining carbon budgetsStuart Jenkins, Michelle Cain, Pierre Friedlingstein, Nathan Gillett, and Myles Allen
The IPCC Special Report on 1.5°C concluded that the maximum level of anthropogenic global warming is “determined by cumulative net global anthropogenic CO2 emissions up to the time of net zero CO2 emissions and the level of non-CO2 radiative forcing” in the decades prior to the time of peak warming. Here we quantify this statement, using CO2-forcing-equivalent (CO2-fe) emissions to calculate remaining carbon budgets without treating available mitigation scenarios as a representative sample of possible futures.
CO2-fe emissions are used to calculate an observationally constrained estimate of the Transient Climate Response to cumulative Emissions (TCRE) using a large ensemble of historical radiative forcing time series. This observationally constrained TCRE is used to calculate remaining total CO2-fe budgets from 2018 to 1.5°C, which we compare with results discussed in Chapter 2 of SR15. We consider contributions to this total remaining budget from CO2 and non-CO2 sources using both historical observations and the available mitigation scenarios in the IAMC scenario database.
We calculate remaining CO2 budgets for a 33, 50 or 66% chance of limiting peak warming to 1.5°C and use these to assess the extent to which scenarios in the IAMC scenario database are consistent with ambitious mitigation as outlined in the Paris Agreement. We argue that, assuming no change in the definition of observed global warming and no increase in TCRE due to non-linear feedbacks, scenarios currently classified as “lower 2°C-compatible” are consistent with a best-estimate peak warming of 1.5°C.
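The CO2-fe bookkeeping above reduces to simple arithmetic: divide observed warming by cumulative CO2-fe emissions to estimate TCRE, then divide the remaining allowable warming by that TCRE. The numbers below are illustrative round values chosen for the sketch, not the observationally constrained estimates of this study.

```python
# Sketch of a TCRE-based budget from CO2-forcing-equivalent emissions
# (all input values are assumed round numbers, for illustration only).
observed_warming = 1.1       # K, human-induced warming to date (assumed)
cumulative_co2_fe = 2900.0   # Gt CO2-fe emitted to date (assumed)
target = 1.5                 # K, peak-warming target

# TCRE expressed in K per 1000 Gt CO2-fe.
tcre_fe = observed_warming / cumulative_co2_fe * 1000.0

# Remaining CO2-fe budget to the target, in Gt CO2-fe.
remaining = (target - observed_warming) / tcre_fe * 1000.0

print(f"TCRE-fe ~ {tcre_fe:.2f} K/1000 Gt; remaining ~ {remaining:.0f} Gt CO2-fe")
```

Because the budget scales with the reciprocal of TCRE, the uncertainty in the TCRE distribution maps directly onto the 33/50/66% budget percentiles discussed above.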
How to cite: Jenkins, S., Cain, M., Friedlingstein, P., Gillett, N., and Allen, M.: Quantifying non-CO2 contributions to remaining carbon budgets, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2452, https://doi.org/10.5194/egusphere-egu2020-2452, 2020.
EGU2020-19723 | Displays | ITS5.1/CL3.6
Revisiting carbon budgets of IPCC 1.5-degree report by taking into account correlation with non-CO2 warming and TCREAntti-Ilari Partanen, Jia Liu, Christopher J. Smith, and Hannele Korhonen
The discovery of a nearly linear relationship between cumulative CO2 emissions and global mean surface temperature increase has given rise to intensive scientific research to assess the maximum allowable CO2 emissions compatible with a temperature threshold such as the goals of the Paris Agreement. The recent IPCC Special Report on Global Warming of 1.5 °C (SR15) used a novel method to calculate the remaining carbon budget for 1.5 °C of warming. The first step was to estimate the non-CO2 warming contribution, based on a perturbed-parameter ensemble of two simple models, to obtain the allowable CO2-caused warming. The second step was to use a probability density distribution for the Transient Climate Response to cumulative CO2 emissions (TCRE) to calculate the carbon budget from the CO2-caused warming. One shortcoming of this method is that it ignores a potential correlation between the non-CO2 warming contribution and TCRE: a significant part of the non-CO2 warming comes from decreasing aerosol forcing, and the present-day aerosol forcing is linked with TCRE.
Here, we revisit the carbon budgets presented in SR15 by taking this correlation into account. We analysed the FaIR model simulations used in SR15 individually and found a linear relationship between TCRE and non-CO2 warming for a given temperature increase. After a slight rescaling so that the revised carbon budget (for 0.5 °C of additional warming) matches the SR15 budget (600 Gt CO2) for the 50th-percentile TCRE value of 0.45 K/1000 Gt CO2, the revised 33rd-67th percentile range was 380-960 Gt CO2, whereas SR15 gave the narrower range of 440-850 Gt CO2. The wider range was expected, as a high TCRE is likely associated with a high present-day aerosol forcing and hence with a high non-CO2 contribution to future warming as aerosol forcing decreases. We analysed only the FaIR model results, and the final SR15 numbers are an average of results based on FaIR and MAGICC; this analysis should therefore be repeated for the MAGICC runs. In conclusion, our results show that TCRE and the non-CO2 warming contribution should not be treated as independent variables when assessing remaining carbon budgets.
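The widening effect can be reproduced with a toy Monte Carlo experiment. All distributions and the correlation coefficient below are invented for illustration; they are not the FaIR or SR15 parameter values. The point is structural: when a high TCRE co-occurs with a high non-CO2 warming contribution, the two effects compound, stretching the budget percentiles in both directions.

```python
import numpy as np

# Toy demonstration (hypothetical distributions) of why a TCRE /
# non-CO2-warming correlation widens the carbon-budget range for
# 0.5 K of additional warming.
rng = np.random.default_rng(1)
n = 100_000
tcre = rng.normal(0.45, 0.12, n)                   # K per 1000 Gt CO2
noise = rng.normal(0.0, 0.05, n)

non_co2_indep = 0.10 + rng.normal(0.0, 0.05, n)    # K, independent of TCRE
non_co2_corr = 0.10 + 0.4 * (tcre - 0.45) + noise  # K, tied to TCRE

def budgets(non_co2):
    # Budget = allowable CO2-caused warming / TCRE, in Gt CO2;
    # clip TCRE away from zero to avoid unphysical divisions.
    return (0.5 - non_co2) / np.clip(tcre, 0.1, None) * 1000.0

def spread(b):
    # Width of the 33rd-67th percentile range, as used in the abstract.
    return np.percentile(b, 67) - np.percentile(b, 33)

print(spread(budgets(non_co2_indep)), spread(budgets(non_co2_corr)))
```

With the positive correlation switched on, low-TCRE samples get both a larger warming allowance and a smaller divisor (and vice versa), so the correlated percentile range is strictly wider than the independent one.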
How to cite: Partanen, A.-I., Liu, J., Smith, C. J., and Korhonen, H.: Revisiting carbon budgets of IPCC 1.5-degree report by taking into account correlation with non-CO2 warming and TCRE, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19723, https://doi.org/10.5194/egusphere-egu2020-19723, 2020.
EGU2020-4573 | Displays | ITS5.1/CL3.6
Curbing our expectations: Global temperature impacts from strong mitigation of individual climate forcersBjørn H. Samset, Jan S. Fuglestvedt, and Marianne T. Lund
Achieving the goals of the Paris Agreement, or a stabilization of the global climate, requires strong and sustained mitigation of a range of anthropogenic emissions. A key stepping stone in this effort would be a measurable reduction in the current rate of global warming, which has been approximately constant since the 1970s, or a statistically significant deviation of the time evolution of observations from a predetermined baseline expectation. The various components that contribute to anthropogenic climate change are, however, markedly different in their total present-day impact and in the time scales from emission reductions to an expected climate-system response. Here, we investigate when a significant change in global mean surface temperature could be expected, relative to an emission pathway consistent with current global policies, for a broad range of long- and short-lived climate forcers. By combining reduced-complexity and Earth System modelling, we investigate a comprehensive set of idealized emission mitigation choices for the near term, while still taking into account natural variability. As expected, mitigation of anthropogenic CO2 emissions stands out as the most efficient option, both in the short and the longer term, although very strong mitigation is required to have a clear effect. Further, we find that strong mitigation policy targeting black carbon (BC) emissions would have a rapid, discernible effect but a low net effect in the longer term. Mitigation of CH4 stands out as an option that combines rapid effects on surface temperature with long-term gains.
How to cite: Samset, B. H., Fuglestvedt, J. S., and Lund, M. T.: Curbing our expectations: Global temperature impacts from strong mitigation of individual climate forcers, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4573, https://doi.org/10.5194/egusphere-egu2020-4573, 2020.
EGU2020-15062 | Displays | ITS5.1/CL3.6
Uncertainty in remaining carbon budgets increases with less ambitious targetsEndre Falck Mentzoni, Andreas Johansen, Andreas Rostrup Martinsen, Kristoffer Rypdal, and Martin Rypdal
EGU2020-5361 | Displays | ITS5.1/CL3.6
Assessing the role of internal variability in the carbon budgets frameworkKatarzyna Tokarska, Nathan P. Gillett, Vivek K. Arora, and Roland Séférian
Carbon budgets are a policy-relevant tool that provides a cap on global total CO2 emissions to limit global mean warming to a desired level, for example to meet the Paris Agreement target. Internal variability due to natural fluctuations of the climate system affects the temperature and the carbon uptake on land and in the ocean. However, uncertainties arising from internal variability have not been quantified in the Transient Climate Response to Cumulative Emissions (TCRE) framework and related carbon budgets. Here we show that even though land carbon uptake exhibits the highest internal variability, most of the uncertainty in TCRE and carbon budgets arises from the temperature component in concentration-driven simulations. The resulting remaining carbon budgets for the 1.5 and 2.0 °C temperature targets differ by up to ±10 PgC (±36.7 Gt CO2; 5-95% range) due to internal variability, which is approximately equivalent to one year of global annual CO2 emissions. Our results suggest that calculating carbon budgets directly from climate model output does not introduce significant biases in TCRE and remaining carbon budgets due to internal variability.
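The unit conversion behind the quoted spread is the molar-mass ratio of CO2 to carbon. The comparison figure of roughly 37 Gt CO2 yr-1 for recent global annual emissions is an assumed round number for illustration.

```python
# Convert the quoted +/- 10 PgC spread to Gt CO2 using the CO2-to-C
# molar-mass ratio (44/12 ~ 3.67; 1 Pg = 1 Gt).
CO2_PER_C = 44.0 / 12.0

spread_pgc = 10.0                      # PgC, from the abstract
spread_gtco2 = spread_pgc * CO2_PER_C  # ~36.7 Gt CO2, matching the text

# Roughly one year of global CO2 emissions (~37 Gt CO2 yr-1, assumed).
print(f"{spread_gtco2:.1f} Gt CO2")    # prints "36.7 Gt CO2"
```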
How to cite: Tokarska, K., Gillett, N. P., Arora, V. K., and Séférian, R.: Assessing the role of internal variability in the carbon budgets framework, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5361, https://doi.org/10.5194/egusphere-egu2020-5361, 2020.
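The budget arithmetic behind the TCRE framework can be sketched in a few lines. The TCRE value and the warming to date used below are illustrative mid-range assumptions, not numbers diagnosed by the study; only the ±10 PgC variability term comes from the abstract.

```python
# Illustrative sketch (not the authors' code): remaining carbon budget
# from the TCRE framework, budget = (T_target - T_now) / TCRE, plus the
# ~±10 PgC internal-variability spread quoted in the abstract.

TCRE = 1.65e-3        # degC per PgC emitted (assumed, illustrative)
WARMING_TO_DATE = 1.1 # degC above pre-industrial (assumed, illustrative)

def remaining_budget(target_warming, warming_to_date=WARMING_TO_DATE,
                     tcre=TCRE):
    """Remaining carbon budget (PgC) for a given temperature target."""
    return (target_warming - warming_to_date) / tcre

budget_15 = remaining_budget(1.5)
budget_20 = remaining_budget(2.0)

# Internal variability contributes roughly +/-10 PgC to either budget,
# about one year of current global CO2 emissions (~10 PgC/yr).
print(f"1.5C budget: {budget_15:.0f} +/- 10 PgC")
print(f"2.0C budget: {budget_20:.0f} +/- 10 PgC")
```

Note how the same ±10 PgC spread is a much larger relative uncertainty for the smaller 1.5 °C budget, consistent with the session theme that budget uncertainty matters most for ambitious targets.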
EGU2020-13753 | Displays | ITS5.1/CL3.6
Using global sea-level rise targets to find optimal temperature overshoot profile
Chao Li, Hermann Held, Sascha Hokamp, and Jochem Marotzke
Even if surface warming could be kept below 2.0°C or 1.5°C by 2100, global sea-level rise (SLR) will continue for several centuries or even millennia. One possible interpretation of a successful climate policy for the next few decades is that it should avoid global-warming-induced impacts on climate, ecosystems and human societies not only within this century, but also for the next centuries and beyond. Here, we perform a proof-of-concept study that introduces a constraint on SLR as a new climate target and compares its economic impact to that of a corresponding temperature target.
In the 21st session of the Conference of the Parties in Paris in 2015, SLR threats to the Small Island Developing States (SIDS) prompted a commitment to strive for the more ambitious goal of limiting surface warming to below 1.5°C. However, an SLR target relates more directly to their existential threats. We substantially augmented the climate model of the optimizing climate-energy-economy model MIND (Model of Investment and Technological Development), replacing an impulse-response model with a three-layer ocean model that much improves the representation of ocean heat uptake. We introduce a global total SLR model with four components: ocean thermal expansion, Greenland ice-sheet melting, Antarctic ice-sheet melting, and melting of mountain glaciers and ice caps. This newly developed integrated-assessment framework has enabled us to investigate, for the first time, a sea-level-rise climate target.
Our results emphasize the key effect of the shape of carbon emissions pathways on SLR after the 21st century, which will affect SIDS over centuries. To reduce SLR-induced impacts on SIDS, a target is required that not only keeps surface warming below a certain level but also reduces surface warming substantially thereafter. We find that, compared to a temperature target with the same SLR by 2200, a global SLR target provides a more sustainable and lower-cost solution to limit both short-term and long-term climate change for stakeholders who, among all global-warming impact categories, primarily care about SLR.
We find that an SLR target can provide a temperature overshoot profile through a physical constraint, rather than by arbitrarily defining an acceptable overshoot range. Temperature targets with a limited overshoot have been invoked to make the 2.0°C and 1.5°C targets feasible in the context of real-world United Nations climate policy; however, rational constraints on the temperature overshoot have been unclear. SLR targets can be viewed as a reinterpretation of the 2.0°C and 1.5°C targets and can provide a rational justification of a certain temperature overshoot for stakeholders who primarily care about SLR. Our framework of reinterpreting the widely agreed temperature targets can, in principle, be transferred from SLR targets to other impact-related climate targets and can be used to identify a more sustainable path toward meeting the Paris Agreement.
How to cite: Li, C., Held, H., Hokamp, S., and Marotzke, J.: Using global sea-level rise targets to find optimal temperature overshoot profile, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13753, https://doi.org/10.5194/egusphere-egu2020-13753, 2020.
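The four-component bookkeeping of the SLR model described above can be sketched as a simple sum. The component values below are hypothetical placeholders for illustration, not output of the MIND framework.

```python
# Minimal sketch of the four-component global SLR bookkeeping described
# in the abstract; all numbers below are placeholder assumptions.

def total_slr(thermal, greenland, antarctica, glaciers):
    """Global mean sea-level rise (m) as the sum of four contributions:
    ocean thermal expansion, Greenland ice-sheet melt, Antarctic
    ice-sheet melt, and mountain glacier / ice cap melt."""
    return thermal + greenland + antarctica + glaciers

# Hypothetical contributions (m) for some future year:
slr = total_slr(thermal=0.20, greenland=0.09, antarctica=0.06,
                glaciers=0.10)
print(f"total SLR: {slr:.2f} m")
```

An SLR target in this framing is then a cap on the sum, which implicitly constrains the temperature pathway because each component responds, on its own time scale, to warming.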
EGU2020-18859 | Displays | ITS5.1/CL3.6
Sensitivity of climate mitigation signals to climate engineering choice and implementation
Katherine Turner, Richard G. Williams, Anna Katavouta, and David J. Beerling
Unlike historical carbon emissions, which have been driven by economics and politics, climate engineering methods must be scientifically assessed, with consideration of the type, rate, and total amount implemented. Temperature reductions from carbon dioxide removal have been found to be proportional to the cumulative amount of carbon removed. Climate engineering “co-benefits”, such as reduced ocean acidification, may also occur and should be considered when optimising an engineered climate solution. In this study we examine the sensitivities of climate engineering to its implementation, focussing on the effects of its time of onset and rate of carbon capture or enhanced weathering, as well as background emissions and ocean physics.
We use two simple coupled models – a Gnanadesikan-style coupled atmosphere-ocean model and the intermediate-complexity Earth system model GENIE – with idealised setups for negative emissions through carbon capture and sequestration, enhanced weathering, or a combination. The inclusion of enhanced weathering provides insight into how changes in ocean carbonate chemistry may impact climate, in terms of both temperature and pH changes. We have created ensembles in which the timing, rate, background emissions scenario, and model physics vary, and we use these ensembles to understand how these choices may impact the efficacy of climate engineering.
We find that the effectiveness of climate engineering depends on the background carbon emissions and on the choice of climate engineering. Carbon capture reduces average surface temperature more per PgC captured than enhanced weathering, and both are more effective under low emissions scenarios. Additionally, background emissions determine how the impact of climate engineering is realised: under high emissions, earlier implementation of climate engineering results in faster temperature mitigation, although the end state is independent of the onset. When considering reductions in ocean acidification, we find that the alkalinity flux in our enhanced weathering experiments leads to a higher pH than for carbon capture, and that the pH signals are less dependent on the timing. Thus, the timing and pathway of the climate engineering are important for the resulting averted warming and acidification, though the final equilibrium is still effectively determined by the cumulative carbon budget.
How to cite: Turner, K., Williams, R. G., Katavouta, A., and Beerling, D. J.: Sensitivity of climate mitigation signals to climate engineering choice and implementation, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18859, https://doi.org/10.5194/egusphere-egu2020-18859, 2020.
EGU2020-12750 | Displays | ITS5.1/CL3.6
The greenhouse gas climate commitment and reversibility of peak warming from historical emissions
Alexander MacIsaac, H. Damon Matthews, Nadine Mengis, and Kirsten Zickfeld
The warming caused by past CO2 emissions is known to persist for centuries to millennia, even in the absence of additional future emissions. Non-CO2 greenhouse gas emissions have caused additional historical warming, though the persistence of this non-CO2 warming varies among gases owing to their different atmospheric lifetimes. Under deep mitigation scenarios, or in an idealized scenario of zero future greenhouse gas emissions, the past warming from shorter-lived non-CO2 gases has been shown to be considerably more reversible than that caused by CO2 emissions. Here we use an intermediate-complexity global climate model coupled to an atmospheric chemistry module to quantify the warming commitment and its reversibility for individual and groups of non-CO2 greenhouse gases. We show that warming caused by gases with short atmospheric lifetimes decreases by more than half of its peak value within 30 years of emissions being zeroed at present day, with more than 80 percent of peak temperature reversed by the end of this century. Despite the fast response of atmospheric temperature to the elimination of non-CO2 emissions, the ocean responds much more slowly: past ocean warming does not reverse, but rather continues for several centuries after zero emissions. Further consequences are shown for the land carbon pool, which decreases as an approximately linear function of historical non-CO2 greenhouse-gas-induced warming. Given that CO2 and non-CO2 greenhouse gas emissions share common emission sources, we also explore a set of scenarios in which emissions are zeroed according to two broad source categories: (1) fossil fuel combustion, and (2) land use and agriculture. Using these additional model runs, we investigate the temperature change that is avoided if all CO2 and non-CO2 greenhouse gas emissions from a particular source abruptly stop while others are allowed to continue.
These results indicate the possibility of land-use change and agricultural activities continuing under deep mitigation scenarios and ambitious climate targets without leading to exceedance of global climate targets. Though we analyze unlikely scenarios, our work provides baselines against which more realistic mitigation scenarios can be assessed. The reversibility of peak temperature caused by historical non-CO2 gases is a relevant measure for policy frameworks seeking to limit global warming to ambitious targets, such as the 1.5 °C target adopted by the Paris Agreement.
How to cite: MacIsaac, A., Matthews, H. D., Mengis, N., and Zickfeld, K.: The greenhouse gas climate commitment and reversibility of peak warming from historical emissions, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12750, https://doi.org/10.5194/egusphere-egu2020-12750, 2020.
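The two reversibility numbers quoted above are mutually consistent under a simple assumption. If warming from a short-lived gas decays roughly exponentially with a ~30-year halving time after emissions stop (an illustrative assumption of ours, not the authors' model), the fraction remaining at any later time follows directly:

```python
# Back-of-envelope consistency check (not the authors' model): fraction
# of peak warming remaining under simple exponential decay with an
# assumed ~30-year halving time.

def fraction_remaining(years, halving_time=30.0):
    """Fraction of peak warming remaining after 'years' of zero
    emissions, assuming exponential decay (illustrative only)."""
    return 0.5 ** (years / halving_time)

print(fraction_remaining(30))  # 0.5 -> half the peak reversed in 30 yr
print(fraction_remaining(80))  # ~0.16 -> over 80% reversed by ~2100
```

So a halving within 30 years implies roughly 84% reversal after 80 years, matching the abstract's "more than 80 percent by the end of this century".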
EGU2020-8433 | Displays | ITS5.1/CL3.6
Controls of the Transient Climate Response to Emissions: effects of physical feedbacks, heat uptake and saturation of radiative forcing
Ric Williams, Paulo Ceppi, and Anna Katavouta
The surface warming response to carbon emissions defines a climate metric, the Transient Climate Response to cumulative carbon Emissions (TCRE), which is important in estimating how much carbon may be emitted while avoiding dangerous climate change. The TCRE is diagnosed from a suite of 9 CMIP6 Earth system models following an annual 1% rise in atmospheric CO2 over 140 years. The TCRE is nearly constant in time during emissions for these climate models, but its value differs between individual models. The near constancy of this climate metric is due to a strengthening in the surface warming per unit radiative forcing – involving a weakening in both the climate feedback parameter and the fraction of radiative forcing warming the ocean interior – which is compensated by a weakening in the radiative forcing per unit carbon emission as the radiative forcing saturates with increasing atmospheric CO2. Inter-model differences in the TCRE are mainly controlled by the surface warming response to radiative forcing, with large inter-model differences in physical climate feedbacks dominating over smaller, partly compensating differences in ocean heat uptake. Inter-model differences in the radiative forcing per unit carbon emission provide smaller inter-model differences in the TCRE, and are mainly due to differences in the ratio of the radiative forcing to the change in atmospheric CO2, rather than to differences in the airborne fraction. Hence, providing tighter constraints on climate projections of the TCRE during emissions requires improving estimates of the physical climate feedbacks, the rate of ocean heat uptake, and how the radiative forcing saturates with atmospheric CO2.
How to cite: Williams, R., Ceppi, P., and Katavouta, A.: Controls of the Transient Climate Response to Emissions: effects of physical feedbacks, heat uptake and saturation of radiative forcing, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8433, https://doi.org/10.5194/egusphere-egu2020-8433, 2020.
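The saturation mechanism invoked above can be made concrete with the standard simplified logarithmic forcing expression F = a ln(C/C0). The airborne fraction and ppm-per-PgC conversion below are assumed illustrative values, not quantities diagnosed from the CMIP6 suite.

```python
# Sketch of one leg of the TCRE decomposition discussed above:
# TCRE = (warming per unit forcing) x (forcing per unit emission),
# where F = a * ln(C/C0) saturates as CO2 rises. Parameters are
# illustrative assumptions, not CMIP6 diagnostics.

import math

A = 5.35              # W m-2 per ln(CO2) (standard simplified value)
C0 = 285.0            # pre-industrial CO2 (ppm)
AIRBORNE_FRAC = 0.55  # fraction of emissions staying airborne (assumed)
PPM_PER_PGC = 0.47    # atmospheric CO2 rise per PgC emitted (assumed)

def forcing(C):
    """Radiative forcing (W m-2) from the logarithmic CO2 expression."""
    return A * math.log(C / C0)

def forcing_per_emission(E_PgC):
    """Radiative forcing per unit cumulative emission (W m-2 / PgC);
    this weakens with cumulative emissions as the forcing saturates."""
    C = C0 + AIRBORNE_FRAC * PPM_PER_PGC * E_PgC
    return forcing(C) / E_PgC

# Forcing per unit emission weakens at higher cumulative emissions,
# offsetting the strengthening warming response per unit forcing:
print(forcing_per_emission(500.0))
print(forcing_per_emission(1500.0))
```

The near-constant TCRE then emerges from the product of this weakening term with a strengthening warming-per-forcing term, as the abstract describes.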
EGU2020-12790 | Displays | ITS5.1/CL3.6
Asymmetry in the climate-carbon cycle response to positive and negative CO2 emissions
Kirsten Zickfeld, Deven Azevedo, and Damon Matthews
The majority of emissions scenarios that limit warming to 2°C, and nearly all emission scenarios that do not exceed 1.5°C warming by the year 2100, require negative CO2 emissions. Negative emission technologies (NETs) in these scenarios are required to offset emissions from sectors that are difficult or costly to decarbonize and to generate global ‘net negative’ emissions, allowing compensation for earlier emissions and the recovery of a carbon budget after overshoot. It is commonly assumed that the carbon cycle and climate response to a negative CO2 emission is equal in magnitude and opposite in sign to the response to an equivalent positive CO2 emission, i.e. that the climate-carbon cycle response is symmetric. This assumption, however, has not been tested for a range of emissions. Here we explore the symmetry of the climate-carbon cycle response by forcing an Earth system model with positive and negative CO2 emission pulses of varying magnitude, applied from different climate states. Our results suggest that an emission of CO2 into the atmosphere is more effective at raising atmospheric CO2 than a CO2 removal is at lowering it, indicating that the carbon cycle response is asymmetric, particularly for emissions/removals > 100 GtC. The surface air temperature response, on the other hand, is largely symmetric. Our findings suggest that the emission and subsequent removal of a given amount of CO2 would not result in the same atmospheric CO2 concentration as if the emission had been avoided. Furthermore, our results imply that using simple models to estimate negative emission requirements may result in underestimating the amount of negative emissions needed to attain a given CO2 concentration target.
How to cite: Zickfeld, K., Azevedo, D., and Matthews, D.: Asymmetry in the climate-carbon cycle response to positive and negative CO2 emissions, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12790, https://doi.org/10.5194/egusphere-egu2020-12790, 2020.
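The direction of the asymmetry can be illustrated with a deliberately crude toy model (ours, not the study's Earth system model): suppose the fraction of a pulse that remains airborne rises with the signed pulse size, reflecting uptake efficiency that weakens at higher CO2 and strengthens after removals. The parameter values are arbitrary illustrations.

```python
# Toy model (our illustration, not the study's Earth system model) of
# the claimed asymmetry: the airborne fraction of a signed CO2 pulse is
# assumed to rise linearly with the pulse size, so emissions raise
# atmospheric carbon more than equal-sized removals lower it.

def airborne_change(pulse_PgC, base_fraction=0.5, sensitivity=5e-4):
    """Atmospheric carbon change (PgC) after a signed emission pulse,
    with an assumed pulse-dependent airborne fraction."""
    fraction = base_fraction + sensitivity * pulse_PgC
    return pulse_PgC * fraction

up = airborne_change(+200.0)    # emission pulse of 200 PgC
down = airborne_change(-200.0)  # removal pulse of 200 PgC
print(up, down)  # the rise exceeds the fall in magnitude
```

Even this caricature reproduces the qualitative finding: an emit-then-remove cycle does not return atmospheric CO2 to where emission avoidance would have left it.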
EGU2020-9435 | Displays | ITS5.1/CL3.6
Is Climate Change Reversible? CDRMIP simulations of the Earth system response to a massive CO2 increase and decrease (emissions followed by negative emissions).
David Keller, Andrew Lenton, Vivian Scott, and Naomi Vaughan and the Modelling groups who contributed to the carbon dioxide removal model intercomparison project and CDRMIP steering committee members
To stabilize long-term climate change at well below 2°C (ideally below 1.5°C) above pre-industrial levels, large and sustained CO2 emission reductions are needed. Despite pledges from numerous governments, the world is not on track to achieve the required reductions within the timeframes outlined in the Paris Agreement, and it appears increasingly likely that an overshoot of the 1.5 or 2 °C temperature target will occur. If this happens, it may be possible to use carbon dioxide removal methods to return atmospheric CO2 concentrations to lower levels, or at least to reduce the magnitude of the overshoot, with the hope that lower CO2 will rapidly lead to lower temperatures and reverse or limit other climate change impacts. Here we present a multi-model analysis of how the Earth system and climate respond during the CMIP6 CDRMIP cdr-reversibility experiment, an idealized overshoot scenario in which CO2 increases from a pre-industrial level by 1% yr-1 until it reaches 4 times the initial value, then decreases at 1% yr-1 until the pre-industrial level is again reached, at which point CO2 is held constant. For many modelled quantities, climate change appears to eventually be reversible, at least when viewed at the global mean level. However, at a local level the results suggest some changes may be irreversible, although spatial patterns of change differ considerably between models. For many variables, the response time scales to the CO2 increase are very different from those to the decrease, with many properties exhibiting long time lags before responding to decreasing CO2, and taking much longer again to return to their unperturbed values (if this occurs).
How to cite: Keller, D., Lenton, A., Scott, V., and Vaughan, N. and the Modelling groups who contributed to the carbon dioxide removal model intercomparison project and CDRMIP steering committee members: Is Climate Change Reversible? CDRMIP simulations of the Earth system response to a massive CO2 increase and decrease (emissions followed by negative emissions)., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9435, https://doi.org/10.5194/egusphere-egu2020-9435, 2020.
To stabilize long-term climate change at well-below 2°C (ideally below 1.5°C) above pre-industrial levels, large and sustained CO2 emission reductions are needed. Despite pledges from numerous governments, the world is not on track to achieve the required reductions within the timeframes outlined in the Paris Agreement, and it appears increasingly likely that an overshoot of the 1.5 or 2 °C temperature target will occur. If this happens, it may be possible to use carbon dioxide removal methods to return atmospheric CO2 concentrations to lower levels or even to reduce the magnitude of the overshoot, with the hope that lower CO2 will rapidly lead to lower temperatures and reverse or limit other climate change impacts. Here we present a multi-model analysis of how the Earth system and climate respond during the CMIP6 CDRMIP cdr-reversibility experiment, an idealized overshoot scenario, where CO2 increases from a pre-industrial level by 1% yr-1 until it is 4 times the initial value, then decrease again at 1% yr-1 until the pre-industrial level is again reached, at which point CO2 is held constant. For many modelled quantities climate change appears to eventually be reversible, at least when viewed at the global mean level. However, at a local level the results suggest some changes may be irreversible, although spatial patterns of change differ considerably between models. For many variables the response time-scales to the CO2 increase are very different than to the decrease in CO2 with a many properties exhibiting long time lags before responding to decreasing CO2, and much longer again to return to their unperturbed values (if this occurs).
In order to reach the reduced carbon emission targets proposed by the Paris Agreement, one widely proposed decarbonizing strategy, among the so-called negative emissions technologies (NETs), is the production and combustion of second-generation bioenergy crops in conjunction with carbon capture and storage (BECCS). International research on NETs has grown rapidly, with publications ranging in scope from reviews of potential and assessments of feasibility to technological maturity and discussions of deployment opportunities. However, concerns have increasingly been raised that ungrounded optimism about the potential of NETs could result in delayed reductions in gross CO2 emissions, with a consequent high risk of overshooting global temperature targets. Negative emissions through BECCS are achieved when the CO2 absorbed from the atmosphere during the growth cycle of biomass is released during combustion and energy production and then captured and stored indefinitely. The simplistic vision of BECCS is that one ton of CO2 captured in the growth of biomass equates to one ton of CO2 sequestered geologically, which we can regard as a carbon efficiency of 1. However, biomass crops are not carbon neutral, as GHG emissions are associated with their cultivation. Furthermore, carbon ‘leaks’ throughout the BECCS value chain. Some life cycle analyses of the entire value chain, from BECCS crop to final carbon storage in the ground, have shown leakage of CO2 greater than the CO2 captured at the point of combustion, and thus a low carbon efficiency. The deployment of BECCS is ultimately reliant on the availability of sufficient, sustainably sourced biomass, an active CCS industry operating at scale, and a favourable policy and commercial environment to incentivise these investments.
It has been suggested that the theoretical global demand for biomass for BECCS could range from 50 EJ/yr to more than 300 EJ/yr, although the technical and economic potential will be significantly less and will depend on uncertain social preferences and economic forces. The two most important factors determining this supply are land availability and land productivity. These factors are in turn determined by competing uses of land and a myriad of environmental and economic considerations. It has been suggested that removing 3.3 GtC/yr with BECCS could require between 360 and 2400 Mha of marginal land annually. The upper bound corresponds to three times the world’s harvested land for cereal production. The conclusion is that estimates of future biomass availability depend on the evolution of a multitude of social, political, and economic factors, including land tenure and regulation, trade, and technology. Consequently, the assumption in future climate scenarios that high rates of NETs can be achieved across many countries and land types has not yet been demonstrated.
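The land figures quoted above imply very different removal rates per hectare; a back-of-envelope sketch, using only the unit conversions (GtC to tC, Mha to ha), makes the spread explicit:

```python
def implied_removal_rate(total_gtc_per_yr, land_mha):
    """Carbon removal per hectare (tC/ha/yr) implied by spreading a total
    annual removal target over a given land area."""
    tonnes_c = total_gtc_per_yr * 1e9   # GtC -> tC
    hectares = land_mha * 1e6           # Mha -> ha
    return tonnes_c / hectares

low = implied_removal_rate(3.3, 2400)   # largest land estimate: ~1.4 tC/ha/yr
high = implied_removal_rate(3.3, 360)   # smallest land estimate: ~9.2 tC/ha/yr
```

The nearly sevenfold gap between these implied per-hectare rates illustrates why assumed land productivity dominates the uncertainty in BECCS supply estimates.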
How to cite: Jones, M.: Can biomass supply meet the demands of BECCS?, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5461, https://doi.org/10.5194/egusphere-egu2020-5461, 2020.
EGU2020-8038 | Displays | ITS5.1/CL3.6
The effect of insect disturbance on forest carbon flux: a multisite analysis
Lingle Chen
Insect outbreaks have a substantial effect on the forest carbon cycle, and can even turn forests from carbon sinks into carbon sources. However, the impact of insect disturbance on the forest carbon cycle has not yet been evaluated across different forest types at the global scale. Hence, we conducted a multi-site analysis comparing changes in ecosystem CO2 fluxes after insect outbreaks, compiling flux data from the literature and flux databases. The final database consists of 21 site-years of eddy covariance data from 17 forest sites spanning a diversity of forest types (temperate forest, mangrove, larch forest, etc.). Our analysis showed that insect outbreaks had significant negative effects on gross ecosystem productivity (GEP), which decreased by 186 g C m-2 y-1 on average, and on net ecosystem productivity (NEP), which decreased by 146 g C m-2 y-1, while ecosystem respiration (Re) showed no significant change. A similar picture emerged from the recovery phase: GEP increased with time since outbreak at a rate of 67.6 g C m-2 y-2 and NEP at a slightly lower rate of 49.2 g C m-2 y-2, but there was no evidence of an increase in Re. With increasing outbreak severity, all three carbon fluxes declined: GEP at a slope of -6.7 g C m-2 y-1, NEP at 66.3 g C m-2 y-1, and Re at -3.4 g C m-2 y-1. We also found that girdling experiments affect the carbon budget differently from natural insect outbreaks, and that drought could dampen the damage to GEP, followed by a greater recovery rate.
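The recovery rates quoted above are regression slopes of flux anomalies against time since outbreak. A minimal sketch of that calculation with ordinary least squares; the site-year values below are hypothetical, chosen only to illustrate the method (they are not the study's data):

```python
def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

# Hypothetical site-years: years since outbreak vs GEP anomaly (g C m-2 y-1)
years = [0, 1, 2, 3, 4]
gep_anomaly = [-186, -120, -50, 15, 85]
recovery_rate = ols_slope(years, gep_anomaly)  # units: g C m-2 y-2
```

Regressing the anomaly on years since disturbance yields a slope in g C m-2 y-2, matching the units of the recovery rates reported in the abstract.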
How to cite: Chen, L.: The effect of insect disturbance on forest carbon flux: a multisite analysis, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8038, https://doi.org/10.5194/egusphere-egu2020-8038, 2020.
ITS5.2/AS3.17 – Science-based Greenhouse Gas Emission Estimates in Support of National and Sub-National Climate Change Mitigation
EGU2020-4411 | Displays | ITS5.2/AS3.17 | Highlight
Tall tower eddy covariance as a tool for evaluating climate change mitigation in Vienna, Austria
Bradley Matthews and Helmut Schume
The need for climate action in cities is becoming increasingly critical. As such, systems that quantify local greenhouse gas (GHG) emissions to evaluate mitigation measures are growing in importance and are set to come under increasing scrutiny. Within the CarboWien project, the University of Natural Resources and Life Sciences, Vienna, the Environment Agency Austria and the telecommunications company A1 Telekom Austria AG are currently collaborating to investigate the potential of a tall tower eddy covariance station to support carbon dioxide (CO2) emissions monitoring in Vienna. Thanks to the tall tower approach (144 m measurement height), the measured turbulent fluxes are representative of net emissions from much of the city area. If maintained in the near- to medium‑term, this facility could provide an additional, independent instrument with which local climate change action can be continuously evaluated.
This conference contribution will present results from the measurement campaign so far (2018-2019). In addition to discussing the early-indicator function of these data and the scope for improving emissions inventories, the presentation will demonstrate how these measurements can be directly used to evaluate local mitigation measures. In particular, analyses of the 30-minute fluxes against local activity/proxy data will show how the performance of measures seeking to reduce CO2 emissions from road traffic and space heating can be inferred.
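The 30-minute fluxes mentioned above come from the eddy covariance technique: the flux is the covariance of vertical wind speed and scalar concentration over the averaging period. A heavily simplified sketch of that core step (real processing adds coordinate rotation, despiking and density (WPL) corrections, which are omitted here):

```python
def ec_flux(w, c):
    """Eddy-covariance flux as the covariance of vertical wind speed w (m/s)
    and scalar concentration c over one averaging period (e.g. 30 min).
    Positive values indicate net upward (emission) flux of the scalar."""
    n = len(w)
    wbar = sum(w) / n
    cbar = sum(c) / n
    return sum((wi - wbar) * (ci - cbar) for wi, ci in zip(w, c)) / n
```

Updrafts (w above its mean) carrying above-mean concentrations contribute positively, which is why a city surface acting as a net CO2 source yields a positive flux at the tower.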
How to cite: Matthews, B. and Schume, H.: Tall tower eddy covariance as a tool for evaluating climate change mitigation in Vienna, Austria, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4411, https://doi.org/10.5194/egusphere-egu2020-4411, 2020.
EGU2020-1643 | Displays | ITS5.2/AS3.17 | Highlight
The CO2 Emissions of US Cities: Status, Dynamics, and Comparisons
Kevin Gurney, Jianming Liang, Geoffrey Roest, and Yang Song
Urban areas are rapidly growing and are acknowledged to dominate greenhouse gas (GHG) emissions to the Earth’s atmosphere. They are also emerging as centers of climate mitigation leadership and innovation. However, fundamental quantitative analysis of urban GHG emissions beyond individual city case studies remains challenging due to a lack of comprehensive, quantitative, methodologically consistent emissions data, raising barriers to both scientific and policy progress. Here we present the first such analysis across the entire US urban landscape, answering a series of fundamental questions about emissions responsibility, emissions drivers and emissions integrity. We find that urbanized areas in the U.S. account for 68.1% of total U.S. fossil fuel carbon dioxide (FFCO2) emissions. Were they counted as a single country, the 5 largest urban emitters in the US would together rank as the 8th largest emitting country on the planet; the top 20 US cities as the 5th largest. In contrast to their dominant overall share, per capita FFCO2 emissions in urbanized areas of the US are 7% lower than in the country as a whole, particularly for onroad gasoline emissions (-12.3%).
Contrary to previous findings, we find that emissions grow more slowly than urban population in Eastern US cities, particularly for larger urban centers. The Western US, by contrast, shows emissions growing proportionately with population. Much of the difference between Eastern and Western cities is determined by the onroad emissions sector. This finding, in particular, suggests that “bigger is better” when considering GHG emissions and U.S. urban population growth.
Finally, we find large and persistent differences between the results presented here and 57 self-reported urban inventories. The mean difference between the self-reported inventories and our analysis is -24% (mean absolute difference: 44.3%), with the majority of self-reported values lower than those quantified in this study.
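The two comparison statistics quoted above (mean difference and mean absolute difference) can be computed straightforwardly; a minimal sketch with illustrative numbers (not the study's 57 inventories):

```python
def inventory_comparison(self_reported, analysis):
    """Mean relative difference and mean absolute relative difference (%)
    of self-reported inventories against an independent analysis."""
    diffs = [(s - a) / a * 100.0 for s, a in zip(self_reported, analysis)]
    mean_diff = sum(diffs) / len(diffs)
    mean_abs_diff = sum(abs(d) for d in diffs) / len(diffs)
    return mean_diff, mean_abs_diff
```

Note that under- and over-reporting cancel in the mean difference but not in the mean absolute difference, which is why the study can report -24% alongside a much larger 44.3% absolute spread.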
How to cite: Gurney, K., Liang, J., Roest, G., and Song, Y.: The CO2 Emissions of US Cities: Status, Dynamics, and Comparisons, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1643, https://doi.org/10.5194/egusphere-egu2020-1643, 2020.
EGU2020-6281 | Displays | ITS5.2/AS3.17 | Highlight
A multi-tiered methane analytic framework for constraining budgets, point source attribution, and anomalous event detection
Daniel Cusworth, Riley Duren, Andrew Thorpe, Natasha Stavros, Brian Bue, Robert Tapella, Vineet Yadav, and Charles Miller
Methane emissions monitoring is rapidly expanding with increasing coverage of surface, airborne, and satellite instruments. However, no single methane instrument or observing strategy can both close emission budgets and pinpoint point sources on regional to global scales. Instead, we present a multi-tiered data analytics system that synthesizes information across various instruments into a single analytic framework. We highlight an example in Los Angeles, where we combine surface measurements from the Los Angeles megacities project, mountaintop measurements from the CLARS-FTS instrument, airborne AVIRIS-NG point source emission estimates, and TROPOMI total column retrievals into a single analytic framework. Surface, mountaintop, and satellite measurements are assimilated into a methane flux inverse model to constrain basin-wide emissions and pinpoint sub-basin methane hotspots. We show an example of a large urban landfill, whose anomalous emissions were detected by the inverse system and validated using AVIRIS-NG methane plume maps. This general approach of quantifying both methane area and point source emissions is an avenue not just for closing regional to global scale budgets, but also for understanding which emission sources dominate the budget (i.e., so-called methane super-emitters). We finally show how this multi-tiered analytic framework can be improved with future satellite missions, and present examples of unexpectedly large methane emissions that were detected by a new generation of satellite imaging spectrometers.
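Flux inversions of the kind described above typically combine a prior emissions estimate with observations through a linear Bayesian update. A minimal sketch of that analytical step (this is a textbook formulation, not the authors' assimilation system; the matrices below are illustrative):

```python
import numpy as np

def bayesian_inversion(H, y, x_prior, S_prior, S_obs):
    """Analytical Bayesian update for a linear inversion y = H x + error:
    posterior x = x_prior + K (y - H x_prior), with gain
    K = S_prior H^T (H S_prior H^T + S_obs)^-1."""
    K = S_prior @ H.T @ np.linalg.inv(H @ S_prior @ H.T + S_obs)
    return x_prior + K @ (y - H @ x_prior)
```

The relative sizes of the prior and observation error covariances (S_prior, S_obs) control how far the posterior fluxes move from the prior toward the measurements, which is how multi-instrument data streams are weighted against each other.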
How to cite: Cusworth, D., Duren, R., Thorpe, A., Stavros, N., Bue, B., Tapella, R., Yadav, V., and Miller, C.: A multi-tiered methane analytic framework for constraining budgets, point source attribution, and anomalous event detection, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6281, https://doi.org/10.5194/egusphere-egu2020-6281, 2020.
EGU2020-11032 | Displays | ITS5.2/AS3.17
Observations of greenhouse gas and short-lived pollutants in the Baltimore Washington area: Quantification and mitigation
Russell Dickerson, Tim Canty, Xinrong Ren, Ross Salawitch, Paul Shepson, Israel Lopez Coto, and James Whetstone
For the past five years, we have been measuring the greenhouse gases CO2 and CH4, along with a suite of pollutants related to photochemical smog (O3, NO2, VOCs, CO) and particulate matter (SO2 (a sulfate precursor) and aerosol optical properties), from a research aircraft. These complement a network of tower-based monitors and provide input to a variety of models used to determine emissions. Initial findings include the identification of landfills and leakage from the natural gas delivery system as major local sources of CH4, as well as substantial upwind sources such as oil and gas operations in the Marcellus shale play. Quantification of emissions and fluxes is complicated by uncertainties in background concentrations and mesoscale dynamics. Comparison of short-lived species has shed light on the efficiency of combustion and pollution control, as well as the temperature dependence of emissions. Ratios of CO:CO2, for example, are consistent with emissions inventories and verify the high efficiency of catalytic converters.
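The CO:CO2 ratio mentioned above is usually computed as an enhancement ratio over background, since both species have large ambient backgrounds. A minimal sketch, with illustrative concentrations (ppb for CO, ppm for CO2):

```python
def enhancement_ratio(co, co2, co_bg, co2_bg):
    """Ratio of CO to CO2 enhancements above background (ppb CO per ppm CO2).
    Lower values imply more complete combustion, e.g. effective
    catalytic converters on the vehicle fleet."""
    return (co - co_bg) / (co2 - co2_bg)

r = enhancement_ratio(250.0, 420.0, 130.0, 410.0)  # ppb/ppm
```

Subtracting the background isolates the local combustion signal, so the ratio can be compared directly against emissions-inventory expectations.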
How to cite: Dickerson, R., Canty, T., Ren, X., Salawitch, R., Shepson, P., Lopez Coto, I., and Whetstone, J.: Observations of greenhouse gas and short-lived pollutants in the Baltimore Washington area: Quantification and mitigation, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11032, https://doi.org/10.5194/egusphere-egu2020-11032, 2020.
EGU2020-12366 | Displays | ITS5.2/AS3.17
Atmospheric observations of CO2, 14CO2 and O2 concentrations to capture fossil fuel CO2 emissions from the Greater Tokyo Area
Yukio Terao, Yasunori Tohjima, Shigeyuki Ishidoya, Mai Ouchi, Yumi Osonoi, Hitoshi Mukai, Toshinobu Machida, Hirofumi Sugawara, Naoki Kaneyasu, and Yosuke Niwa
The Greater Tokyo Area is the most populated metropolitan area in the world (38 million people). To capture fossil fuel carbon dioxide (CO2) emissions from the Greater Tokyo Area, we performed ground-based atmospheric observations of CO2, radiocarbon in CO2 (14CO2), oxygen (O2) and carbon monoxide (CO) concentrations at Tokyo Skytree (TST, with a high-altitude (250 m) inlet) and Yoyogi (YYG, a turbulent CO2 flux measurement site located in a residential area) in Tokyo, and at the National Institute for Environmental Studies (NIES, a suburban/rural site) in Ibaraki, Japan. The 14CO2 measurements were used to separate fossil fuel CO2 emissions from biotic emissions. Results from the 14CO2 measurements showed that the ratio of fossil fuel-derived CO2 to the variation in CO2 concentrations was 71% on average in winter at both TST and YYG, but varied from 44% to 92%, indicating a significant contribution of biotic CO2 in Tokyo. The O2:CO2 exchange ratio (oxidation ratio, OR) was used to partition CO2 emissions between gas fuels and gasoline. We observed a larger OR in winter than in summer (due to both wintertime increases in fossil fuel combustion and summertime terrestrial biospheric activity) at TST and YYG, and a larger OR in the morning and late evening in winter due to increased gas fuel combustion at YYG. We show that O2 concentrations might also be used as a proxy for continuous monitoring of fossil fuel CO2 content by assuming typical ratios of gas fuel and gasoline combustion. The presenter will introduce related projects, including the development of building/road-scale dynamic CO2 mapping and a grid-based CO2 emission inventory with high spatial resolution for Tokyo.
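The 14CO2-based separation of fossil fuel CO2 rests on a standard radiocarbon mass balance: fossil carbon is 14C-free (Delta-14C = -1000 per mil), so the observed Delta-14C depletion relative to background scales with the fossil contribution. A minimal sketch of that balance, neglecting secondary corrections (e.g. nuclear and heterotrophic 14C sources); the values in the comment are illustrative, not the study's data:

```python
def fossil_fuel_co2(co2_bg, d14c_obs, d14c_bg, d14c_ff=-1000.0):
    """Fossil-fuel CO2 enhancement (ppm) from a Delta-14C mass balance:
    CO2_ff = CO2_bg * (D_bg - D_obs) / (D_obs - D_ff),
    where D values are Delta-14C in per mil and fossil carbon has
    D_ff = -1000 per mil (no 14C)."""
    return co2_bg * (d14c_bg - d14c_obs) / (d14c_obs - d14c_ff)

# e.g. a 20 per-mil depletion below a 0 per-mil background at 410 ppm
# implies roughly 8 ppm of fossil-fuel CO2.
```

Because each ~2.5-3 per-mil depletion corresponds to about 1 ppm of fossil CO2, precise 14CO2 measurements are needed to resolve urban fossil-fuel enhancements.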
How to cite: Terao, Y., Tohjima, Y., Ishidoya, S., Ouchi, M., Osonoi, Y., Mukai, H., Machida, T., Sugawara, H., Kaneyasu, N., and Niwa, Y.: Atmospheric observations of CO2, 14CO2 and O2 concentrations to capture fossil fuel CO2 emissions from the Greater Tokyo Area, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12366, https://doi.org/10.5194/egusphere-egu2020-12366, 2020.
EGU2020-5930 | Displays | ITS5.2/AS3.17
COoL-AMmetropolis: towards establishing virtuous greenhouse gas emission mitigation scenarios for 2035 in the Aix-Marseille metropolis area (France) through atmospheric top-down technics and social sciences methods in interaction with local stakeholders
Irène Xueref-Remy, Aurélie Riandet, Ludovic Lelandais, Brian Nathan, Mélissa Milne, Valéry Masson, Marie-Laure Lambert, Alexandre Armengaud, Jocelyn Turnbull, Christophe Yohia, Antoine Nicault, Thomas Lauvaux, Jacques Piazzola, Christine Lac, Thierry Hedde, Samuel Robert, Guillaume Simioni, Wolfgang Cramer, and Alberte Bondeau
Most of the global population lives in cities, which are expected to expand rapidly in the coming decades. Cities and their industrial facilities are estimated to release more than 70% of fossil fuel CO2, although these estimates need to be verified at the city scale. Furthermore, cities experience higher temperatures than their surrounding rural areas due to the Urban Heat Island (UHI) effect, which also directly influences some CO2 fluxes (for example from domestic heating of buildings, car air-conditioning, and urban and rural vegetation uptake). Cities are thus strategic places where actions to mitigate CO2 emissions and to limit the rise in atmospheric temperature should be undertaken as a priority.
The ANR COoL-AMmetropolis project focuses on characterizing and mitigating CO2 emissions and the UHI in the Aix-Marseille metropolis (AMm), whose new governance entity is the “Metropole Aix-Marseille-Provence” (AMPM). AMm is the second most populated area of France (1.8 M inhabitants), is highly industrialized, and is located in the PACA region, which is strongly exposed to the risks of climate change. The objectives of the project are: (1) to verify and improve the spatio-temporal distribution of AMm FFCO2 emission estimates and to quantify their current contribution relative to natural fluxes; (2) to characterize the variability of the UHI and atmospheric CO2 at diurnal, synoptic and seasonal scales in the AMm area, and to model the interactions between the UHI and CO2 sources and sinks from the local to the AMm scale; and (3) to define and evaluate the benefits of development scenarios for the AMm urban ecosystem to the horizon 2035 for mitigating both CO2 emissions and the UHI at the different scales, and to find the most effective way to integrate the virtuous scenarios, defined in interaction with stakeholders, into legal and urban planning schemes, tools, charters and practices.
To reach these objectives, a multidisciplinary consortium of 5 main partners (IMBE, CNRM, LIEU, AtmoSud, UMS Pytheas) and 6 non-funded partners (LSCE, INRA/URM, ESPACE, MIO, DTN, GNZ New Zealand) is proposed, ensuring complementarity between atmospheric physicists, urban planners, territorial jurists, emission inventory specialists and AMm socio-economic actors with privileged links to local and regional stakeholders. Through its expertise and the organisation of annual seminars, GREC-SUD (sub-contractor) will reinforce these interactions.
The project is organized in 4 work packages. WP0 is dedicated to project coordination. WP1 covers the collection and analysis of CO2 and UHI observations, and WP2 the development and assessment of the CO2 and UHI modelling framework. WP1 and WP2 will feed WP3, which is dedicated to the role of the several governance levels within the AMPM in urban adaptation strategies on the UHI and CO2 issues. WP3 relies on analyses of legal documents, multi-indicator evaluation of scenarios, and a strategy of ensuring regular interactions between the research community, local stakeholders and civil society throughout the full project duration.
The ANR COoL-AMmetropolis project is funded for 4 years, starting in January 2020.
How to cite: Xueref-Remy, I., Riandet, A., Lelandais, L., Nathan, B., Milne, M., Masson, V., Lambert, M.-L., Armengaud, A., Turnbull, J., Yohia, C., Nicault, A., Lauvaux, T., Piazzola, J., Lac, C., Hedde, T., Robert, S., Simioni, G., Cramer, W., and Bondeau, A.: COoL-AMmetropolis : towards establishing virtuous greenhouse gas emission mitigation scenarios for 2035 in the Aix-Marseille metropolis area (France) through atmospheric top-down technics and social sciences methods in interaction with local stakeholders., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5930, https://doi.org/10.5194/egusphere-egu2020-5930, 2020.
EGU2020-20364 | Displays | ITS5.2/AS3.17
Intensive CO2 and CH4 measurement campaign at Mexico-CityMichel Ramonet, Noémie Taquet, and Michel Grutter and the MERCI-CO2
Mexico City (MC) is home to 21.2 M people, 19% of the country's population. The MC urban area has intense emissions of pollutants and greenhouse gases, which accumulate in the overlying air-shed due to the location of the city in a high-altitude basin surrounded by mountains. Local and national authorities have engaged in aggressive emission reduction strategies. The Mexican-French collaborative project MERCI-CO2 aims to develop atmospheric CO2 measurements that, with the support of atmospheric inversion, will make it possible to verify the effectiveness of the CO2 emission reductions undertaken by the city authorities. MERCI-CO2 combines high-precision analysers and low-cost sensors for surface measurements with total-column observations upwind and downwind of MC. In addition to the long-term infrastructure currently deployed, an intensive campaign in spring 2020 will produce an unprecedented data set. For this campaign we will deploy, during one month, six EM27 spectrometers for total-column CO2, CH4 and CO observations; two high-precision analysers at fixed positions and one on board a car for transect measurements; and ten low-cost CO2 sensors that will be set up at air-quality stations of the local city network measuring CO, NOx and O3. The dense network will be deployed before, during and after the Easter vacation period in early April. During this week, traffic, which represents about 70% of CO2 emissions, will be significantly reduced. The atmosphere will be analyzed with a high-resolution transport model to infer the reduction of the surface emissions. This result will be compared to the reduction of traffic inferred from car-counting statistics and to bottom-up estimates. The EM27 instruments will then be moved around a large landfill in order to measure the CH4 enhancement due to this installation and to estimate its emission. The waste sector represents by far the largest CH4 contributor (about 90%) in Mexico and remains subject to large uncertainties.
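The idea of inferring an emission change from the concentration change during the low-traffic week can be illustrated with a deliberately simplified sketch. In a steady one-box model the urban enhancement scales linearly with the emission rate; the function name and the example numbers below are hypothetical illustrations, not project results (the campaign itself relies on a high-resolution transport model, not a box model).

```python
# Hypothetical one-box sketch: relate the drop in the observed urban CO2
# enhancement during a low-traffic week to a relative change in traffic
# emissions. All numbers are illustrative.

def emission_change_fraction(enh_before_ppm, enh_during_ppm, traffic_share=0.70):
    """In a steady one-box model the enhancement is proportional to the
    emission rate, so the relative total-emission change equals the relative
    enhancement change; dividing by the traffic share of emissions converts
    it to a relative change in traffic emissions."""
    total_change = (enh_before_ppm - enh_during_ppm) / enh_before_ppm
    return total_change / traffic_share

# Example: enhancement drops from 10 ppm to 6.5 ppm -> 35% total reduction,
# i.e. a 50% cut in traffic emissions if traffic is 70% of the total.
print(round(emission_change_fraction(10.0, 6.5), 2))
```

In practice ventilation varies from day to day, which is why the real analysis uses a transport model rather than this proportionality argument.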
How to cite: Ramonet, M., Taquet, N., and Grutter, M. and the MERCI-CO2: Intensive CO2 and CH4 measurement campaign at Mexico-City, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20364, https://doi.org/10.5194/egusphere-egu2020-20364, 2020.
EGU2020-8683 | Displays | ITS5.2/AS3.17 | Highlight
From science to policy: how can research community contribute to the reporting and verification needs under the Paris Agreement?Lucia Perugini, Guido Pellis, Giacomo Grassi, Philippe Ciais, Han Dolman, Joanna I. House, Glen Peters, Pete Smith, Dirk Günter, and Philippe Peylin
The time for action on the Paris Agreement is upon us, requiring all signatory countries to have a robust reporting and accounting system that is transparent, accurate, complete, consistent and comparable (through the Enhanced Transparency Framework), with a periodic review of collective progress towards the 2°C temperature goal (the Global Stocktake). The research community is therefore called upon to reinforce databases and methodologies to improve national greenhouse gas inventory estimates, especially for developing countries that are subject to new reporting obligations, and also to define a comparable scientific “benchmark" for assessing achievement of the Paris Agreement goal.
Despite the key role of science in the process, research communities working on emission statistics have often approached the problem of climate change from different angles, using terminologies, metrics, rules and approaches (e.g. spatial and temporal scales) that do not always match those used by the inventory communities. Within the VERIFY project (Horizon 2020, grant agreement No 776810), a network between the two communities (research and inventory) has been established. The resulting discussions highlighted the importance of a continued exchange to increase mutual understanding of the needs, terms, rules, procedures and guidelines in use, especially those adopted under the UNFCCC and Paris Agreement process.
The presentation will therefore guide researchers through the monitoring, reporting and verification frameworks under the UNFCCC and the Paris Agreement, identifying how and where scientific production can assist the inventory communities in improving greenhouse gas estimates and verification systems. Land Use, Land-Use Change and Forestry is the most complicated sector to deal with because of the intricacy of flux attribution (fluxes can be both anthropogenic and non-anthropogenic) and its methodological complexity, compounded by common misunderstandings in the use of terminologies and by differing definitions.
On the basis of the available literature and the outcomes of the work undertaken under the VERIFY project, we provide an analysis of the possible critical issues and main misunderstandings that could arise, identifying options for resolving them.
How to cite: Perugini, L., Pellis, G., Grassi, G., Ciais, P., Dolman, H., House, J. I., Peters, G., Smith, P., Günter, D., and Peylin, P.: From science to policy: how can research community contribute to the reporting and verification needs under the Paris Agreement?, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8683, https://doi.org/10.5194/egusphere-egu2020-8683, 2020.
EGU2020-6432 | Displays | ITS5.2/AS3.17
Was Australia a sink or source of carbon dioxide in 2015? Data assimilation using OCO-2 satellite dataYohanna Villalobos, Peter Rayner, Steven Thomas, and Jeremy Silver
Estimates of the net CO2 flux at a continental scale are essential to building confidence in the global carbon budget. In this study, we present the assimilation of satellite data from the Orbiting Carbon Observatory-2 (OCO-2) (land nadir and glint data) to estimate Australian CO2 surface fluxes for 2015. We used the Community Multiscale Air Quality (CMAQ) model and a four-dimensional variational scheme. Our preliminary results suggest that Australia was a slight carbon sink during 2015, at -0.15 ± 0.11 PgC yr-1 compared to the prior estimate of 0.13 ± 0.55 PgC yr-1. The monthly seasonal cycle shows poor agreement between the prior and posterior fluxes in 2015: our monthly posterior estimates suggest that from May to August Australia was a sink of CO2 and from October to December a source, whereas the prior estimates show the opposite sign. To examine these results more deeply, we aggregated the CO2 surface fluxes into six categories using the Moderate Resolution Imaging Spectroradiometer (MODIS) Land Cover Type product and divided them into two areas (north and south). Our posterior fluxes aggregated over southern and northern Australia indicate that most of the CO2 uptake is driven by grasses and cereal crops, representing -0.11 ± 0.027 and -0.06 ± 0.05 PgC yr-1 in these two regions, respectively. In the southern region, the monthly time series of this category shows that this uptake occurs mainly from June to September, whereas in the north it occurs from January to March. We evaluate our posterior CO2 concentrations against the Total Carbon Column Observing Network (TCCON) and in-situ measurements. We use the TCCON stations at Darwin, Wollongong, and Lauder (in New Zealand). Amongst the in-situ measurements, we considered stations located at Gunn Point (near Darwin), Cape Grim (in Tasmania), and Iron Bark and Burncluith (in Queensland).
Analysis of the monthly biases indicates that the CO2 concentrations simulated with the posterior fluxes agree better with TCCON data than with in-situ measurements. In general, monthly mean biases at TCCON Darwin are improved by almost 70 per cent. The Lauder and Wollongong stations are strongly affected by ocean fluxes, which have small prior uncertainty in this inversion; biases are hence not much improved there. We verify this by relating the bias to wind direction: when winds come from the ocean, fluxes over Australia are less well constrained by OCO-2 data. Biases against in-situ data are generally not improved by the assimilation, suggesting either problems with the transport model or an inability of OCO-2 data to constrain fluxes at scales relevant to these measurements.
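The aggregation step described above (gridded posterior fluxes summed into land-cover categories for a northern and a southern region) can be sketched as follows. The grids, class codes and the latitude used to split the regions are illustrative assumptions, not the study's actual configuration.

```python
import numpy as np

# Hypothetical sketch of aggregating gridded posterior fluxes by MODIS-style
# land-cover class, split into northern and southern regions. The -25 deg S
# dividing latitude and the inputs are illustrative.

def aggregate_fluxes(flux, area, landcover, lat, classes):
    """flux: (ny, nx) flux density [kgC m-2 yr-1]; area: (ny, nx) cell areas
    [m2]; landcover: (ny, nx) integer class map; lat: (ny,) cell latitudes.
    Returns {(region, cls): total flux [kgC yr-1]}."""
    north = lat[:, None] > -25.0          # broadcast (ny, 1) over columns
    totals = {}
    for cls in classes:
        mask = landcover == cls
        totals[("north", cls)] = float((flux * area)[mask & north].sum())
        totals[("south", cls)] = float((flux * area)[mask & ~north].sum())
    return totals
```

Summing flux density times cell area gives a mass total per category, which is what allows negative (uptake) and positive (release) regions to be compared on the PgC yr-1 scale quoted above.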
How to cite: Villalobos, Y., Rayner, P., Thomas, S., and Silver, J.: Was Australia a sink or source of carbon dioxide in 2015? Data assimilation using OCO-2 satellite data, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6432, https://doi.org/10.5194/egusphere-egu2020-6432, 2020.
EGU2020-21917 | Displays | ITS5.2/AS3.17
On forest monitoring and reporting in developing countries: lessons learnt and way forwardInge Jonckheere and Esther Mertens
Under the Paris Agreement, countries need to prepare GHG inventories with emissions by source and removals by sinks. In order to meet the UNFCCC quality standards, those inventories should be transparent, accurate, comparable, consistent and complete. For the LULUCF sector, emissions result from changes in one of the five IPCC carbon pools (e.g. aboveground biomass). The change in carbon stock is not easily measured directly, but is usually estimated using proxies of land area and area change together with the average carbon stocks in the area. Countries encounter several challenges when collecting forestry and land-use data, related to the inherent complexity of measuring and monitoring the LULUCF sector, and are limited by their institutional arrangements. The UN REDD+ programme has a long history of supporting developing countries in setting up forest (and land-use) monitoring systems, which has helped several countries to produce regular data and make them publicly available, including through web geoportals. In this paper, we list the challenges of forestry and land data collection and demonstrate the potential leading role of REDD+ countries in reporting regular GHG estimates for the LULUCF sector and in preparing GHG baselines for NDC progress reporting under the Paris Agreement, also in light of recent developments at COP25.
Key terms: institutional arrangements; institutional memory; data management systems; legal instruments; sustainability; national forest monitoring systems; LULUCF reporting; regular monitoring of land-use data; preparation of land-use change data; data portals for increased transparency and stakeholder involvement; targeted finance for data measurements at the different agencies involved in the GHG inventory.
How to cite: Jonckheere, I. and Mertens, E.: On forest monitoring and reporting in developing countries: lessons learnt and way forward , EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21917, https://doi.org/10.5194/egusphere-egu2020-21917, 2020.
EGU2020-6202 | Displays | ITS5.2/AS3.17
From concentrations requirements to emission committments: prospects and challenges for the Global StocktakeKevin Bowman, Junjie Liu, Anthony Bloom, Sassan Saatchi, Liang Xu, Kazayuki Miyazaki, Meemong Lee, Dimitris Menemenlis, Dustin Carroll, and David Schimel
The Paris Agreement was a watershed moment in providing a framework to address the mitigation of climate change. The Global Stocktake is a twice-a-decade process to assess progress in greenhouse gas emission reductions in light of climate feedbacks and response. However, the relationship between emission commitments and concentration requirements is confounded by complex natural biogeochemical processes potentially modulated by climate feedbacks. We investigate the prospects and challenges of mediating between emissions and concentrations through the NASA Carbon Monitoring System Flux (CMS-Flux) project, an inverse modeling and data assimilation system that ingests a suite of observations, including the Orbiting Carbon Observatory-2 (OCO-2) and state-of-the-art biomass change maps across the carbon cycle, to attribute atmospheric carbon variability to anthropogenic and biogeochemical processes. We decompose the spatial drivers of CO2 accumulation since the beginning of the decade into component fluxes and emissions in the context of the historic 2010 and 2015 El Niños, which had a tremendous influence on the CO2 growth rate. These processes reshuffle the primary contributors to CO2 growth at Stocktake time scales, which must be reconciled with Nationally Determined Contributions and concentration targets. Based on these findings, we investigate how systems such as CMS-Flux can harness the carbon constellation to fill a vital gap between policy needs and the scientific assessment needed for the Stocktake.
How to cite: Bowman, K., Liu, J., Bloom, A., Saatchi, S., Xu, L., Miyazaki, K., Lee, M., Menemenlis, D., Carroll, D., and Schimel, D.: From concentrations requirements to emission committments: prospects and challenges for the Global Stocktake, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6202, https://doi.org/10.5194/egusphere-egu2020-6202, 2020.
EGU2020-8356 | Displays | ITS5.2/AS3.17
Assessments of in situ and remotely sensed CO2 observations in a Carbon Cycle Fossil Fuel Data Assimilation System to estimate fossil fuel emissionsMarko Scholze, Thomas Kaminski, Peter Rayner, Michael Vossbeck, Michael Buchwitz, Maximilian Reuter, Wolfgang Knorr, Hans Chen, Anna Agusti-Panareda, Armin Löscher, and Yasjka Mejer
The Paris Agreement establishes a transparency framework that builds upon inventory-based national greenhouse gas emission reports, complemented by independent emission estimates derived from atmospheric measurements through inverse modelling. The capability of such a Monitoring and Verification Support (MVS) capacity to constrain fossil fuel emissions to a sufficient extent has not yet been assessed. The CO2 Monitoring Mission, planned as a constellation of satellites measuring column-integrated atmospheric CO2 concentration (XCO2), is expected to become a key component of an MVS capacity.
Here we provide an assessment of the potential of a Carbon Cycle Fossil Fuel Data Assimilation System using synthetic XCO2 and other observations to constrain fossil fuel CO2 emissions for an exemplary 1-week period in 2008. We find that the system can provide useful weekly estimates of country-scale fossil fuel emissions independent of national inventories. When extrapolated from the weekly to the annual scale, uncertainties in emissions are comparable to uncertainties in inventories, so that estimates from inventories and from the MVS capacity can be used for mutual verification.
We further demonstrate an alternative, synergistic mode of operation, which delivers a best emission estimate through assimilation of the inventory information as an additional data stream. We show the sensitivity of the results to the setup of the CCFFDAS and to various aspects of the data streams that are assimilated, including assessments of surface networks.
How to cite: Scholze, M., Kaminski, T., Rayner, P., Vossbeck, M., Buchwitz, M., Reuter, M., Knorr, W., Chen, H., Agusti-Panareda, A., Löscher, A., and Mejer, Y.: Assessments of in situ and remotely sensed CO2 observations in a Carbon Cycle Fossil Fuel Data Assimilation System to estimate fossil fuel emissions, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8356, https://doi.org/10.5194/egusphere-egu2020-8356, 2020.
EGU2020-12638 | Displays | ITS5.2/AS3.17
Country scale analysis of natural and anthropogenic methane emissions with a high-resolution inverse model using GOSAT and ground-based observationsRajesh Janardanan, Shamil Maksyutov, Aki Tsuruta, Fenjuan Wang, Yogesh Tiwari, Vinu Valsala, Akihiko Ito, Yukio Yoshida, Johannes Kaiser, Greet Janssens-Maenhout, and Tsuneo Matsunaga and the observation group
EGU2020-9151 | Displays | ITS5.2/AS3.17 | Highlight
The changing global methane budget. NERC’s MOYA, ZWAMPS and methane reduction projects, and the need for better tropical information and mitigation.Euan G. Nisbet, David Lowry, Rebecca E. Fisher, James L. France, Grant Allen, and James Lee
The UK NERC MOYA Global Methane Budget consortium (2016-2020) tracks the changing methane burden through time-series measurements of methane and its isotopes at remote sites, field campaigns in the Arctic, Europe, and Tropics, and modelling studies. The methane rise that began in 2007 and accelerated in 2014 has continued, as apparently has the isotopic shift to lighter, more C-12 rich values (Nisbet et al. 2019, GBC).
MOYA flight campaigns in S. America (Bolivian Amazonia) and Africa (including ZWAMPS, an aircraft campaign over Upper Congo wetlands around Lake Bangweulu, Zambia), have shown significant tropical emissions from wetlands, cattle and fires. Isotopic values of emissions from wetlands and cattle show strong C4 plant input. Fire is a major source, including biomass burning of seasonal C4 grassland and also of C3 leaf litter in wooded savanna. MOYA campaigns have also identified widespread and significant urban and rural air pollution in tropical Africa from crop waste and urban waste fires, including plastic burning.
MOYA’s work has identified strong opportunities for reducing anthropogenic emissions and highlights the need for better emissions quantification in tropical nations. Mitigation is feasible not only in northern nations, for example by drastically cutting fossil fuel emissions, but is also urgently necessary in tropical nations, where much better inventory information is needed. Natural sources such as wetlands are intractable to mitigation, and their emissions are likely to increase, with climate warming feeding further warming. Cost-effective, low-technology actions in tropical nations, such as covering landfills with soil and reducing waste fires, would have a significant impact on emissions. Emission reductions from landfills, sewage, and waste fires, especially around the rapidly growing tropical megacities, would also bring significant health benefits by cutting air pollution.
Sharp near-future reductions in anthropogenic methane emissions are indeed possible (Nisbet et al., Rev. Geophys., 2020) and are probably inexpensive compared to other routes to decarbonization, but cutting methane will need strong action, including a determined effort from tropical nations.
How to cite: Nisbet, E. G., Lowry, D., Fisher, R. E., France, J. L., Allen, G., and Lee, J.: The changing global methane budget. NERC’s MOYA, ZWAMPS and methane reduction projects, and the need for better tropical information and mitigation., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9151, https://doi.org/10.5194/egusphere-egu2020-9151, 2020.
EGU2020-7767 | Displays | ITS5.2/AS3.17
Simulation experiments for studying an optimal carbon dioxide monitoring network for Osaka, JapanTakayuki Hayashida, Tomohiro Oda, Takashi Machimura, Takanori Matsui, Akihiko Kuze, Hiroshi Suto, Kei Shiomi, Fumie Kataoka, Tetsuo Fukui, Yukio Terao, Masahide Nishihashi, Kazutaka Murakami, Takahiro Sasai, Makoto Saito, and Hiroshi Tanimoto
We prototype an Observing System Simulation Experiment (OSSE) system for studying an optimal carbon dioxide (CO2) monitoring network in Osaka, one of the most populated cities in Japan (population: 8.8 million). In the first phase of our project, we built a multi-resolution, spatially explicit fossil fuel CO2 emissions model to better quantify CO2 emissions using updated and detailed geospatial information. In the second phase, we coupled the emission model to the WRF-Chem model and developed an OSSE capability to study an optimal CO2 observation network for Osaka. After completing an evaluation of the meteorological and emission fields, we started simulating atmospheric CO2 concentrations under possible emission scenarios and examined the detectability of emission changes by hypothetical ground-based observation networks. We started from existing air quality monitoring sites and then selected suitable sites based on how much useful signal can be obtained. In order to fully examine the detectability of CO2 emission changes in the presence of potentially strong local and inflow biospheric CO2 contributions, we included biospheric fluxes calculated from the BEAMS model. We have also attempted to calculate the cost of establishing the observational sites. Our ultimate goal is to help decision makers design an effective observation network given their emission reduction target as well as their budget constraints.
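The detectability test at the heart of such an OSSE can be framed as a signal-to-noise comparison: simulate concentrations under a baseline and a reduced-emission scenario, and retain candidate sites where the scenario difference in the time-averaged signal exceeds the measurement noise. A schematic sketch with entirely hypothetical numbers (not the Osaka system's values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hourly CO2 enhancements (ppm) at 5 candidate sites under
# a baseline scenario and a 20%-reduced emission scenario, over one month.
n_sites, n_hours = 5, 24 * 30
baseline = rng.gamma(shape=2.0, scale=1.5, size=(n_sites, n_hours))
reduced = 0.8 * baseline  # uniform 20% emission cut

sigma_obs = 0.5  # assumed combined instrument + model noise (ppm, 1-sigma)

# Mean scenario signal per site vs. the noise of the monthly mean.
signal = (baseline - reduced).mean(axis=1)
noise_of_mean = sigma_obs / np.sqrt(n_hours)
detectable = signal > 2.0 * noise_of_mean  # simple 2-sigma criterion

for i, (s, ok) in enumerate(zip(signal, detectable)):
    print(f"site {i}: signal {s:.3f} ppm, detectable={ok}")
```

In a real OSSE the per-site enhancements would come from WRF-Chem transport of the scenario emissions rather than random draws, and correlated model errors would replace the white-noise assumption.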
How to cite: Hayashida, T., Oda, T., Machimura, T., Matsui, T., Kuze, A., Suto, H., Shiomi, K., Kataoka, F., Fukui, T., Terao, Y., Nishihashi, M., Murakami, K., Sasai, T., Saito, M., and Tanimoto, H.: Simulation experiments for studying an optimal carbon dioxide monitoring network for Osaka, Japan, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7767, https://doi.org/10.5194/egusphere-egu2020-7767, 2020.
EGU2020-10116 | Displays | ITS5.2/AS3.17
OCO-3 Snapshot Area Mapping Mode: Early ResultsRobert Roland Nelson, Annmarie Eldering, Thomas Kurosu, Matthäus Kiel, Brendan Fisher, Ryan Pavlick, Gary Spiers, Rob Rosenberg, David Crisp, Christopher O'Dell, Peter Somkuti, Thomas Taylor, Eric Kort, Tomohiro Oda, Ray Nassar, and Thomas Lauvaux
The NASA Orbiting Carbon Observatory-3 (OCO-3) was launched to the International Space Station on May 4, 2019 and has been taking measurements since August. OCO-3, like its predecessor OCO-2, makes hyperspectral measurements of reflected sunlight in three near-infrared bands. However, one of the unique features of OCO-3 is its ability to scan large contiguous areas on the order of 80 km by 80 km using a pointing mirror assembly. This capability, known as snapshot area mapping (SAM) mode, is being used to look at cities, forests, volcanoes, and multiple other areas that are of interest to the carbon dioxide (CO2) and solar-induced chlorophyll fluorescence (SIF) scientific communities. For example, OCO-3 can measure column-mean CO2 (XCO2) over the entire Los Angeles, CA basin in the span of only two minutes. With several hundred SAMs collected so far and upwards of 25 possible per day, there is a wealth of data to investigate for scientific features and for any potential instrument biases. Additionally, this type of dense sampling will be a proof-of-concept for multiple future wide-swath CO2 missions. Here, we present several OCO-3 SAM mode measurements and discuss interesting features, XCO2 results, and future mission plans.
How to cite: Nelson, R. R., Eldering, A., Kurosu, T., Kiel, M., Fisher, B., Pavlick, R., Spiers, G., Rosenberg, R., Crisp, D., O'Dell, C., Somkuti, P., Taylor, T., Kort, E., Oda, T., Nassar, R., and Lauvaux, T.: OCO-3 Snapshot Area Mapping Mode: Early Results, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10116, https://doi.org/10.5194/egusphere-egu2020-10116, 2020.
EGU2020-11578 | Displays | ITS5.2/AS3.17
Informing urban greenhouse gas quantification and mitigation using high-resolution CO2 emissions: a case study in Baltimore, USAGeoffrey Roest, Kevin Gurney, Scot Miller, and Jianming Liang
As atmospheric carbon dioxide (CO2) levels continue to rise, a global effort to mitigate greenhouse gas (GHG) emissions is underway. Urban domains, which are responsible for more than 70% of global anthropogenic CO2 emissions, are emerging as leaders in mitigation policy and planning – especially in the United States of America (US), which has formally withdrawn from the Paris Agreement. However, cities face obstacles in developing comprehensive and spatially explicit GHG inventories to inform specific actions and goals. The Vulcan emission product provides highly resolved Scope 1 fossil fuel CO2 (FFCO2) emissions in space and time for the entire US, while the Hestia emission products utilize even more granular spatiotemporal data within four US urban domains. Here, we present results from Hestia for Baltimore – a colonial-era city on the Atlantic Coast of the US. Scope 1 FFCO2 emissions are dominated by energy consumption in buildings, on-road vehicle emissions, and industrial point sources. Large, systematic differences exist between Hestia and Baltimore’s self-reported GHG inventory, which follows the Global Protocol for Community-scale Greenhouse Gas Emission Inventories (GPC). These differences include entire sectors being omitted from emissions reporting due to a determination of ownership (e.g. Scope 1 vs. Scope 3), data gaps and limitations, and a conflation of Scope 1 and Scope 2 electricity production emissions. Urban planning may be better informed by utilizing additional data sources on fuel and energy consumption – especially fuel and energy that are not provided by a centralized utility – to develop comprehensive GHG emission estimates.
How to cite: Roest, G., Gurney, K., Miller, S., and Liang, J.: Informing urban greenhouse gas quantification and mitigation using high-resolution CO2 emissions: a case study in Baltimore, USA, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11578, https://doi.org/10.5194/egusphere-egu2020-11578, 2020.
EGU2020-19237 | Displays | ITS5.2/AS3.17
Greenhouse Gas Emission Estimate Using a Fully-automated Permanent Sensor Network in MunichFlorian Dietrich, Jia Chen, Benno Voggenreiter, and Xinxu Zhao
To effectively mitigate climate change, it is indispensable to know the locations of emission sources and their respective strengths. As the majority of greenhouse gases (GHGs) such as carbon dioxide (CO2), methane (CH4) and carbon monoxide (CO) are generated in cities, our focus lies on determining the emissions of urban areas. For this reason, we established a fully automated sensor network in Munich, Germany to permanently measure GHGs.
Our permanent network is based on the differential column measurement principle [1] and measures the city’s emissions using five FTIR spectrometer systems (EM27/SUN from Bruker [2]). For these spectrometers we built a self-developed enclosure system and equipped them with several sensors (e.g. a computer-vision-based solar radiation sensor) to measure the column-averaged concentrations of CO2, CH4 and CO in a fully automated way. The difference between the column amounts inside and outside of the city reflects the abundance of pollutants generated in the city. Four stations are placed at the city outskirts to capture the inflow/outflow column amounts in arbitrary wind conditions. One inner-city station, which has been operating successfully since 2016 [3], serves as a permanent downwind site for half of the city.
With the help of atmospheric transport models, combined with a Bayesian inverse modelling approach, these concentration differences are translated into spatially resolved emission estimates for the city. After testing the network in two campaigns (2017 and 2018), it has been in long-term operation since summer 2019 and continuously measures GHG concentrations in Munich. We will show both the hardware achievements and the first measurement and emission results after ten months of operation.
[1] Chen et al.: Differential column measurements using compact solar-tracking spectrometers, Atmos. Chem. Phys., 16, 8479–8498, 2016.
[2] Gisi et al.: XCO2-measurements with a tabletop FTS using solar absorption spectroscopy, Atmos. Meas. Tech., 5, 2969–2980, 2012.
[3] Heinle and Chen: Automated Enclosure and Protection System for Compact Solar-Tracking Spectrometers, Atmos. Meas. Tech., 11, 2173–2185, 2018.
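The Bayesian inversion step described above has a standard analytic form: given a linear transport operator mapping gridded emissions to observed column differences, the posterior emissions follow from the Gaussian update. A toy illustration with randomly generated matrices (not the Munich transport model or its dimensions):

```python
import numpy as np

rng = np.random.default_rng(1)

n_obs, n_cells = 40, 10  # column-difference observations, emission grid cells
# Toy transport Jacobian: sensitivity of each observation to each cell's flux.
H = rng.uniform(0.0, 0.1, size=(n_obs, n_cells))

x_true = rng.uniform(1.0, 5.0, n_cells)    # "true" emissions (synthetic)
x_prior = np.full(n_cells, 3.0)            # prior emission guess
S_prior = np.eye(n_cells) * 2.0**2         # prior error covariance
R = np.eye(n_obs) * 0.2**2                 # observation error covariance

# Synthetic column differences: transport of truth plus measurement noise.
y = H @ x_true + rng.normal(0.0, 0.2, n_obs)

# Gaussian Bayesian update: x_post = x_prior + G (y - H x_prior)
G = S_prior @ H.T @ np.linalg.inv(H @ S_prior @ H.T + R)
x_post = x_prior + G @ (y - H @ x_prior)

print("prior RMSE:    ", np.sqrt(np.mean((x_prior - x_true) ** 2)))
print("posterior RMSE:", np.sqrt(np.mean((x_post - x_true) ** 2)))
```

In practice the Jacobian `H` would be built from atmospheric transport model runs, and the covariances would encode spatial correlations rather than being diagonal.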
How to cite: Dietrich, F., Chen, J., Voggenreiter, B., and Zhao, X.: Greenhouse Gas Emission Estimate Using a Fully-automated Permanent Sensor Network in Munich, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19237, https://doi.org/10.5194/egusphere-egu2020-19237, 2020.
EGU2020-19498 | Displays | ITS5.2/AS3.17
Urban carbon dioxide flux monitoring using Eddy Covariance and Earth Observation: An introduction to diFUME projectStavros Stagakis, Christian Feigenwinter, and Roland Vogt
Monitoring CO2 emissions originating from urban areas has become a necessity to support sustainable urban planning strategies and climate change mitigation efforts. Integrative decision support, where the net effects of various emission/sink components are considered and compared, is now an increasingly relevant part of urban planning processes. Current emission inventories rely on indirect approaches that use fuel and electricity consumption statistics to determine CO2 emissions. The consistency of such approaches is questionable, and they usually neglect the contribution of the biogenic components of the urban carbon cycle (i.e. vegetation, soil). Moreover, their spatial and temporal scales are restricted because consumption statistics are often available only at coarse spatial scales (national, provincial/state, municipal) and are usually scaled down using proxy data (e.g. population density) to city-scale annual estimates. The diFUME project (https://mcr.unibas.ch/difume/) is developing a methodology for mapping and monitoring the actual urban CO2 flux at optimal spatial and temporal scales that are meaningful for urban design decisions. The goal is to develop, apply and evaluate independent models capable of estimating all the different components of the urban carbon cycle (i.e. building emissions, traffic emissions, human metabolism, photosynthetic uptake, plant respiration, soil respiration), combining mainly Eddy Covariance (EC) with Earth Observation (EO) data. EC provides continuous in-situ measurements of the CO2 flux at the local scale. Processing, analysis and interpretation of urban EC measurements are challenging due to the inherent spatial complexity of the CO2 source and sink configurations of the urban structure. The diFUME methodology uses multiple EO datasets to achieve multi-scale monitoring of urban cover, morphology and vegetation phenology in order to characterize the urban source/sink configurations and parameterize turbulent flux source area models.
Such a combination of EC and EO provides enhanced interpretation of the measured CO2 flux and analysis of its controlling factors, and therefore the potential for fine-scale mapping and monitoring. The diFUME methodology is being developed and applied in the city of Basel, exploiting the available long-term database (> 15 years) of urban EC measurements. The first results highlight the potential of EO-derived geospatial data to interpret the complexity of urban EC measurements. Seasonal and land-cover-related trends in the EC-measured CO2 flux are recognized, while the use of environmental, census and mobility datasets increases the interpretation capabilities and the modelling potential of the urban CO2 flux patterns.
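The component-wise accounting that diFUME targets can be written as a signed sum over sources and sinks, which an EC tower observes only in aggregate. A schematic sketch with hypothetical component values (not Basel measurements):

```python
# Net urban CO2 flux as the sum of source and sink components, matching
# the component list in the abstract. Sign convention: positive = emission
# to the atmosphere. Values are hypothetical, in umol m-2 s-1.
components = {
    "building_emissions": 6.0,
    "traffic_emissions": 4.5,
    "human_metabolism": 0.8,
    "photosynthetic_uptake": -3.2,  # the only sink term here
    "plant_respiration": 1.0,
    "soil_respiration": 0.9,
}

# This net value is what an eddy covariance tower measures; the modelling
# task is to disaggregate it back into the individual components.
net_flux = sum(components.values())
print(f"net CO2 flux: {net_flux:.1f} umol m-2 s-1")
```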
How to cite: Stagakis, S., Feigenwinter, C., and Vogt, R.: Urban carbon dioxide flux monitoring using Eddy Covariance and Earth Observation: An introduction to diFUME project, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19498, https://doi.org/10.5194/egusphere-egu2020-19498, 2020.
EGU2020-19798 | Displays | ITS5.2/AS3.17
Methane Estimates in the Northeastern US using Continuous Measurements from a Regional Tower NetworkKimberly Mueller, Subhomoy Ghosh, Anna Karion, Sharon Gourdji, Israel Lopez-Coto, and Lee Murray
In the past decade, there has been a scientific focus on improving the accuracy and precision of methane (CH4) emission estimates in the United States, with much effort targeting oil and natural gas producing basins. Yet, regional CH4 emissions and their attribution to specific sources continue to have significant associated uncertainties. Recent urban work using aircraft observations has suggested that CH4 emissions are not well characterized in major cities along the U.S. East Coast; discrepancies have been attributed to an under-estimation of fugitive emissions from the distribution of natural gas. However, much regional and urban research has involved the use of aircraft campaigns that can only provide a spatio-temporal snapshot of the CH4 emission landscape. As such, the annual representation and the seasonal variability of emissions remain largely unknown. To further investigate CH4 emissions, we present preliminary CH4 emission estimates in the Northeastern US as part of NIST’s Northeast Corridor (NEC) testbed project using a regional inversion framework. This area encompasses over 20% of the US and contains many of the dominant CH4 emission sources important at both regional and local scales. The atmospheric inversion can estimate sub-monthly 0.1-degree emissions using observations from a regional network of up to 37 in-situ towers; some towers are in non-urban areas while others are in cities or suburban areas. The inversion uses different emission products to provide a prior constraint, including anthropogenic emissions from both EDGAR v4.2 for the year 2008 and the US EPA inventory for the year 2012, and natural wetland CH4 emissions from the WetCHARTs ensemble mean for the year 2010. Results include the comparison of synthetic model-simulated CH4 concentrations (i.e., convolutions of the emission products with WRF-STILT footprints plus background) to mole fractions measured at the regional in-situ sites.
The comparison provides an indication of how well our prior understanding of emissions and incoming air flow matches the atmospheric signatures of the underlying CH4 sources. We also present a preliminary set of CH4 fluxes for a selected number of urban centers and discuss challenges in estimating highly resolved methane emissions using high-frequency in-situ observations for a regional domain (e.g. few constraints, skewness in underlying fluxes, representing incoming background, etc.). Overall, this work provides the basis for a year-long inversion that will yield regional CH4 emissions over the Northeast US with a focus on Eastern urban areas.
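The regional inversion framework described above follows the standard Bayesian formalism, in which footprints relate fluxes to observed mole-fraction enhancements. A minimal sketch of the analytical solution (toy dimensions and invented numbers, not the NIST/NEC implementation) is:

```python
import numpy as np

def bayesian_inversion(H, y, x_prior, B, R):
    """Analytical Bayesian linear inversion:
        x_post = x_prior + B H^T (H B H^T + R)^-1 (y - H x_prior)
    H: Jacobian (e.g. transport footprints), y: observed enhancements,
    B: prior flux error covariance, R: model-data mismatch covariance."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
    x_post = x_prior + K @ (y - H @ x_prior)       # posterior mean fluxes
    A_post = B - K @ H @ B                         # posterior covariance
    return x_post, A_post

# Toy example: 3 towers observing 2 flux regions
H = np.array([[1.0, 0.2], [0.3, 1.0], [0.5, 0.5]])
x_true = np.array([2.0, 1.0])
y = H @ x_true                       # perfect synthetic observations
x_prior = np.array([1.0, 1.0])       # biased prior
B = np.eye(2)
R = np.eye(3) * 1e-4                 # tight observation errors
x_post, A_post = bayesian_inversion(H, y, x_prior, B, R)
```

With well-constrained, low-noise observations the posterior recovers the true fluxes; sparse networks or skewed flux distributions (as noted above) degrade this in practice.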
How to cite: Mueller, K., Ghosh, S., Karion, A., Gourdji, S., Lopez-Coto, I., and Murray, L.: Methane Estimates in the Northeastern US using Continuous Measurements from a Regional Tower Network, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19798, https://doi.org/10.5194/egusphere-egu2020-19798, 2020.
EGU2020-21177 | Displays | ITS5.2/AS3.17
London Carbon Emission ExperimentNeil Humpage, Hartmut Boesch, Robbie Ramsay, Andrew Gray, Jack Gillespie, Jerome Woodwark, and Mathew Williams
Carbon emissions related to fossil-fuel use are particularly localized, with urban areas being the dominant contributor, responsible for more than 70% of global emissions. In the future, the share of the urban population is expected to continue to rise, further concentrating fossil-fuel-related emissions in urban areas. Cities are also the focal point of many political decisions on mitigation and stabilization of emissions, often setting more ambitious targets than national governments (e.g. C40 cities). For example, the Mayor of London has set the ambitious target for London to be a zero-carbon city by 2050. If we want to devise robust, well-informed climate change mitigation policies, we need a much better understanding of the carbon budget for cities and the nature of the diverse emission sources, underpinned by new approaches that allow verifying and optimizing city carbon emissions and their trends.
New satellite observations of CO2 from missions such as OCO-3, MicroCarb and CO2M, especially when used in conjunction with ground-based sensor networks, provide a powerful novel capability for evaluating and eventually improving existing CO2 emission inventories. We will set up a measurement network upwind and downwind of London using portable greenhouse gas (CO2, CH4, CO) column sensors (Bruker EM27/SUN) together with UV/VIS DOAS spectrometers (NO2), which will be operated for extended time periods thanks to automation of the sensors. The data acquired from the network will not only allow us to critically assess the quality of satellite observations over urban environments, but also to derive data-driven emission estimates using a measurement-modelling framework. In this presentation we will discuss the setup of the experiment, give a description of the sensors, and show some first observations obtained with them.
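An upwind/downwind column network lends itself to a simple mass-balance emission estimate: the column enhancement across the city, times wind speed and cross-wind plume extent, gives an emission rate. The sketch below is illustrative only; all numbers, including the dry-air column density, are rough assumptions and not values from the experiment:

```python
def city_emission_mass_balance(x_down_ppm, x_up_ppm, wind_ms, plume_width_m,
                               air_column_mol_m2=3.56e5, m_co2=44.01):
    """Crude mass-balance estimate of a city CO2 source from upwind and
    downwind column observations.

    The XCO2 enhancement (ppm) is converted to a CO2 column density
    (mol m-2) using an assumed dry-air column, then multiplied by wind
    speed and cross-wind plume width to yield an emission rate (kg s-1)."""
    delta_omega = (x_down_ppm - x_up_ppm) * 1e-6 * air_column_mol_m2  # mol m-2
    emission_mol_s = delta_omega * wind_ms * plume_width_m            # mol s-1
    return emission_mol_s * m_co2 / 1000.0                            # kg s-1

# Example: 1 ppm XCO2 enhancement, 5 m/s wind, 30 km cross-wind extent
e_kg_s = city_emission_mass_balance(411.0, 410.0, 5.0, 30e3)
```

Such back-of-the-envelope estimates motivate the measurement-modelling framework mentioned above, which replaces the fixed-geometry assumptions with actual transport simulations.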
How to cite: Humpage, N., Boesch, H., Ramsay, R., Gray, A., Gillespie, J., Woodwark, J., and Williams, M.: London Carbon Emission Experiment, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21177, https://doi.org/10.5194/egusphere-egu2020-21177, 2020.
EGU2020-10996 | Displays | ITS5.2/AS3.17
Downscaling fossil-fuel CO2 emissions to policy relevant scales: Current errors and biases, expected improvements, and future perspectivesTomohiro Oda, Rostyslav Bun, Miguel Román, Zhuosen Wang, Ranjay Shrestha, Thomas Lauvaux, Cong Richao, Makoto Saito, Masahide Nishihashi, Yukio Terao, Shamil Maksyutov, Bradley Matthews, Michael Anderl, Lesley Ott, and Steven Pawson
Many of the global and regional gridded emission inventories used in atmospheric research are based on downscaling techniques. Regardless of their limitations compared to locally constructed mechanistic emission inventories, such gridded datasets will retain a key role in transferring the information reported in emission inventories into science-based emission verification support (EVS) systems. Given the use of inverse modeling in EVS systems, characterizing errors and biases associated with the downscaled emission field is critical in order to obtain robust verification results. However, such error characterization is often challenging due to the lack of objective metrics.
This study compares downscaled emissions from the ODIAC global high-resolution dataset to values taken from the reported inventories and from other independent emission products, with the intent of assessing the validity (e.g., error, bias, or accuracy) of downscaled emission databases at different policy-relevant scales. ODIAC is based on its flagship high-resolution emission downscaling using satellite-observed nighttime lights (NTL) and point source information. The sole use of the NTL proxy for diffuse emissions has limitations; however, it provides a good opportunity to evaluate the performance of NTL alone as an emission proxy. It is now relatively straightforward to create detailed, high-resolution emission maps thanks to advancements in geospatial modeling. However, such geospatial modeling techniques, which combine multiple pieces of information from different sources, are often neither validated nor even carefully evaluated.
As commonly done in previous emission uncertainty studies, we use differences and agreement as a proxy for errors and improvements. We collect emission information reported at policy-relevant scales, such as the state/province/prefecture, city and facility level (the latter only for point sources). We also use locally constructed fine-grained emission inventories as a quasi-truth for the emission distribution. We further assess the performance of NASA’s Black Marble NTL product suite as a new emission proxy in relation to the current ODIAC proxy, which is based on older NTL datasets. Finally, we look at how these emission differences translate into atmospheric concentration differences using high-resolution WRF simulations.
Based on results from the comparison, we identify and discuss the challenges and limitations in the use of downscaled emissions in carbon monitoring at different policy-relevant scales, especially at the city level, and propose possible ways to overcome some of the challenges and provide emission fields that are useful for both science and policy applications.
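The proxy-based downscaling at the heart of such datasets can be illustrated in a few lines. This is a generic sketch of distributing a reported total over a grid in proportion to nighttime-light radiance, not the actual ODIAC algorithm (which also treats point sources separately):

```python
import numpy as np

def downscale_by_proxy(reported_total, proxy_grid):
    """Distribute a reported emission total over a grid proportionally to a
    spatial proxy (e.g. nighttime-light radiance). Cells with zero proxy
    receive zero emissions; the gridded field sums back to the total."""
    proxy = np.asarray(proxy_grid, dtype=float)
    weights = proxy / proxy.sum()          # proxy shares per grid cell
    return reported_total * weights

# Illustrative 2x2 radiance grid; one dark (rural) cell, one bright (urban)
ntl = np.array([[0.0, 5.0],
                [10.0, 35.0]])
grid = downscale_by_proxy(100.0, ntl)      # e.g. 100 kt CO2 national total
```

The sketch makes the error structure obvious: wherever the proxy misrepresents the true emission intensity (e.g. bright but low-emitting areas), the downscaled field inherits that bias, which is exactly what the comparisons above aim to quantify.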
How to cite: Oda, T., Bun, R., Román, M., Wang, Z., Shrestha, R., Lauvaux, T., Richao, C., Saito, M., Nishihashi, M., Terao, Y., Maksyutov, S., Matthews, B., Anderl, M., Ott, L., and Pawson, S.: Downscaling fossil-fuel CO2 emissions to policy relevant scales: Current errors and biases, expected improvements, and future perspectives, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10996, https://doi.org/10.5194/egusphere-egu2020-10996, 2020.
EGU2020-19293 | Displays | ITS5.2/AS3.17
Performance of upcoming CO2 monitoring satellites in the new high-resolution inverse model CTDAS-WRFFriedemann Reum, Liesbeth Florentie, Wouter Peters, Matthieu Dogniaux, Cyril Crevoisier, Bojan Sic, and Sander Houweling
Efforts to reduce greenhouse gas (GHG) emissions require support by independent monitoring. The inverse modeling approach to emission quantification, based on measurements of atmospheric GHG mixing ratios, promises objective GHG flux estimates consistent across country borders. Yet, GHG flux quantification on national scales and below is impeded both by the sparsity of atmospheric data and by uncertainties in atmospheric GHG transport modeling. To overcome these challenges, the EU supports two concept studies for GHG monitoring satellites via the H2020 projects CHE (CO2M satellite) and SCARBO. Both systems aim at vast coverage and high accuracy and precision. Within these projects, we developed a variant of the CarbonTracker Europe inverse model (van der Laan-Luijkx et al., 2017) that uses WRF-GHG (Beck et al., 2011) to model atmospheric transport (CTDAS-WRF). In this presentation, we first introduce how the versatility of WRF-Chem and the modular structure of CTDAS enable our model to estimate GHG fluxes across scales, from point sources to integrated continental fluxes. Next, we use our new model to demonstrate the potential skill of the proposed SCARBO satellite constellation for reducing uncertainties of national-scale CO2 fluxes, focusing on aerosol-induced errors. We demonstrate that this concept has the potential to greatly improve upon existing CO2 monitoring systems because of its unprecedented coverage. Lastly, we outline our plans for using CTDAS-WRF to assess the skill of the proposed CO2M monitoring system for estimating city-scale CO2 emissions.
References:
Beck, V., et al.: The WRF Greenhouse Gas Model (WRF-GHG) Technical Report, [online] Available from: https://www.bgc-jena.mpg.de/bgc-systems/pmwiki2/uploads/Download/Wrf-ghg/WRF-GHG_Techn_Report.pdf, 2011.
van der Laan-Luijkx, I. T., et al.: The CarbonTracker Data Assimilation Shell (CTDAS) v1.0: Implementation and global carbon balance 2001-2015, Geosci. Model Dev., 10(7), 2785–2800, doi:10.5194/gmd-10-2785-2017, 2017.
Acknowledgements:
This work has received funding from the European Union’s H2020 research and innovation programme under grant agreement No 769032 (SCARBO) and 776186 (CHE).
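The "skill for reducing uncertainties" discussed above is commonly quantified as the fractional reduction of posterior versus prior flux uncertainty as observational coverage grows. A minimal linear-Gaussian sketch (toy dimensions, not the CTDAS-WRF system) is:

```python
import numpy as np

def uncertainty_reduction(H, B, R):
    """Fractional uncertainty reduction, 1 - sigma_post / sigma_prior, per flux
    element, for observation operator H, prior covariance B and observation
    error covariance R, in the standard linear-Gaussian framework."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    A = B - K @ H @ B                      # posterior covariance
    return 1.0 - np.sqrt(np.diag(A)) / np.sqrt(np.diag(B))

B = np.eye(2)                              # two flux regions, unit prior error
H_sparse = np.array([[1.0, 0.0]])          # one observation, sees region 1 only
H_dense = np.vstack([H_sparse, [[0.0, 1.0]]])  # added coverage of region 2
ur_sparse = uncertainty_reduction(H_sparse, B, np.eye(1) * 0.1)
ur_dense = uncertainty_reduction(H_dense, B, np.eye(2) * 0.1)
```

A region outside the observed footprint retains its full prior uncertainty; extending coverage (as a satellite constellation would) constrains it, which is the effect the SCARBO and CO2M assessments quantify at realistic scales.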
How to cite: Reum, F., Florentie, L., Peters, W., Dogniaux, M., Crevoisier, C., Sic, B., and Houweling, S.: Performance of upcoming CO2 monitoring satellites in the new high-resolution inverse model CTDAS-WRF, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19293, https://doi.org/10.5194/egusphere-egu2020-19293, 2020.
EGU2020-17176 | Displays | ITS5.2/AS3.17
Top-down Support of Swiss non-CO2 Greenhouse Gas Emissions Reporting to UNFCCCStephan Henne, Martin K. Vollmer, Martin Steinbacher, Markus Leuenberger, Frank Meinhardt, Joachim Mohn, Lukas Emmenegger, Dominik Brunner, and Stefan Reimann
Globally, emissions of long-lived non-CO2 greenhouse gases (GHGs; methane, nitrous oxide and halogenated compounds) account for approximately 30% of the radiative forcing of all anthropogenic GHG emissions. In industrialised countries, 'bottom-up' estimates come with relatively large uncertainties for anthropogenic non-CO2 GHGs when compared with those of anthropogenic CO2. 'Top-down' methods on the country scale offer an independent support tool to reduce these uncertainties and detect biases in emissions reported to the UNFCCC. Based on atmospheric concentration observations, these tools can also track the effectiveness of emission mitigation measures over the long term.
Since 2012 the Swiss national inventory report (NIR) has contained an appendix on 'top-down' studies for selected halogenated compounds. Subsequently, this appendix was extended to include methane and nitrous oxide. Here, we present these updated (2020 submission) regional-scale (~300 x 200 km2) atmospheric inversion studies for non-CO2 GHG emission estimates in Switzerland, making use of observations on the Swiss Plateau (Beromünster tall tower) as well as the neighbouring mountain-top sites Jungfraujoch and Schauinsland.
We report spatially and temporally resolved Swiss emissions for CH4 (2013-2019), N2O (2017-2019) and total Swiss emissions for hydrofluorocarbons (HFCs) and SF6 (2009-2019) based on a Bayesian inversion system and a tracer ratio method, respectively. Both approaches make use of transport simulations applying the high-resolution (7 x 7 km2) Lagrangian particle dispersion model (FLEXPART-COSMO). We compare these 'top-down' estimates to the 'bottom-up' results reported by Switzerland to the UNFCCC. Although we find good agreement between the two estimates for some species (CH4, N2O), emissions of other compounds (e.g., considerably lower 'top-down' estimates for HFC-134a) show larger discrepancies. Potential reasons for the disagreements are discussed. Currently, our 'top-down' information is only used for comparative purposes and does not feed back into the 'bottom-up' inventory.
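The tracer ratio method mentioned above scales a well-quantified reference emission by the observed enhancement ratio of the target species in the same air masses. A simplified sketch (ignoring molar-mass conversions and using invented numbers) is:

```python
import numpy as np

def tracer_ratio_emission(dx_target, dx_ref, e_ref):
    """Tracer ratio method: scale a well-known reference emission e_ref by the
    least-squares regression slope of correlated concentration enhancements of
    the target species against the reference species."""
    dx_target = np.asarray(dx_target, dtype=float)
    dx_ref = np.asarray(dx_ref, dtype=float)
    slope = np.polyfit(dx_ref, dx_target, 1)[0]  # enhancement ratio
    return slope * e_ref

# Example: target enhancements are consistently twice the reference's,
# and the reference source is known to emit 10 units per year
ref_enh = np.array([0.5, 1.0, 2.0, 3.0])
tgt_enh = 2.0 * ref_enh
e_target = tracer_ratio_emission(tgt_enh, ref_enh, e_ref=10.0)
```

In practice the enhancement ratio is computed in molar units and converted with molar masses, and the method's accuracy hinges on the target and reference sources being co-located, which is why it complements rather than replaces the Bayesian inversion.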
How to cite: Henne, S., Vollmer, M. K., Steinbacher, M., Leuenberger, M., Meinhardt, F., Mohn, J., Emmenegger, L., Brunner, D., and Reimann, S.: Top-down Support of Swiss non-CO2 Greenhouse Gas Emissions Reporting to UNFCCC, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17176, https://doi.org/10.5194/egusphere-egu2020-17176, 2020.
EGU2020-19458 | Displays | ITS5.2/AS3.17
Global and regional emission estimates for three ozone-depleting hydrochlorofluoro-carbons (HCFCs) with no known end-usesMartin Vollmer and the AGAGE (Advanced Global Atmospheric Gases Experiment) Team
We present first results on atmospheric abundances and inferred emissions of the previously undetected ozone-depleting hydrochlorofluorocarbon HCFC-132b (1,2-dichloro-1,1-difluoroethane). In addition, we report significant updates on observations and inferred emissions for HCFC-133a (2-chloro-1,1,1-trifluoroethane) and HCFC-31 (chlorofluoromethane). All three compounds are Ozone Depleting Substances (ODSs) and their production is regulated under the Montreal Protocol on Substances that Deplete the Ozone Layer. However, they are not known as end-user products from which potential emissions to the atmosphere could occur. Rather, we hypothesize that the compounds are emitted as byproducts during the production of hydrofluorocarbons (HFCs). If this holds true, then the phase-out regulations of the Protocol do not apply to them; nevertheless, the Protocol's overarching Vienna Convention encourages the parties to minimize such ODS byproduct emissions.
In-situ fully intercalibrated high-precision measurements of the recently discovered HCFC-132b have been made for several years at the stations of the Advanced Global Atmospheric Gases Experiment (AGAGE) and are complemented with measurements from archived air samples (1978 – present) of the Cape Grim Air Archive. Based on these measurements we reconstruct global HCFC-132b trends showing its first appearance in the atmosphere in the late 1990s, followed by a general growth in the atmosphere to current globally-averaged mole fractions of approx. 0.13 ppt (picomol mol-1). Global emissions, which are derived from these observations using the AGAGE 12-box model, show a general increase to approx. 1 Gg yr-1 in 2019. Observation-based top-down regional emission estimates for the East-Asian region, as derived from a Bayesian inversion with the FLEXPART Lagrangian model, can explain all of the global emissions within the uncertainties of the method. Half of these emissions are allocated to Eastern China, a region where enhanced emissions for other ODSs were previously found. Emissions from Europe are comparably insignificant, but an analysis of the source locations supports the hypothesis that HCFC-132b emissions are a byproduct from HFC production. In addition to HCFC-132b, we present significant updates on observations of HCFC-133a and HCFC-31. HCFC-133a measurements are now fully integrated into the AGAGE network and provide a wealth of atmospheric observations. Similar to HCFC-132b, we show, for example, that abundances and global emissions of these two compounds have generally increased over the last few years.
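The link between inferred global emissions and observed mole fractions in box models such as the AGAGE 12-box model can be illustrated with a single well-mixed box. The conversion factor and lifetime below are placeholders for illustration, not real atmospheric constants for any of these compounds:

```python
def box_model_mole_fraction(emissions_gg_yr, lifetime_yr, years,
                            kg_per_ppt=1.0e7, c0_ppt=0.0, dt=0.01):
    """One-box analogue of a multi-box model: integrate
        dC/dt = E/m - C/tau
    by forward Euler to obtain a global-mean mole fraction C (ppt) from a
    constant emission rate E, where kg_per_ppt (atmospheric burden per ppt
    of this gas) and the lifetime tau are illustrative placeholders."""
    c = c0_ppt
    source_ppt_yr = emissions_gg_yr * 1e6 / kg_per_ppt  # Gg/yr -> ppt/yr
    for _ in range(int(round(years / dt))):
        c += (source_ppt_yr - c / lifetime_yr) * dt     # emission minus loss
    return c

# Example: constant 1 Gg/yr source with an assumed 20-year lifetime
c_after_20yr = box_model_mole_fraction(1.0, 20.0, 20.0)
```

Inverting this relation, i.e. finding the emission history whose simulated mole fractions match the observed record, is how the archive-based abundance reconstruction yields the global emission estimates quoted above.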
How to cite: Vollmer, M. and the AGAGE (Advanced Global Atmospheric Gases Experiment) Team: Global and regional emission estimates for three ozone-depleting hydrochlorofluoro-carbons (HCFCs) with no known end-uses, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19458, https://doi.org/10.5194/egusphere-egu2020-19458, 2020.
EGU2020-12362 | Displays | ITS5.2/AS3.17
Assessment of hydrological and biogeochemical effects on N2O emission factors in river networks of eastern China based on long-term study
Minpeng Hu, Randy Dahlgren, and Dingjiang Chen
N2O emission factors (EFs) in river networks remain a major source of uncertainty due to limited data availability. This study integrated three years of multiple stable isotope (15N-NO3-/18O-NO3- and 2H-H2O/18O-H2O) and hydrochemistry measurements for river water and groundwater to evaluate the effects of hydrological and biogeochemical processes on riverine N2O emission factors in the Yongan watershed (2474 km2) of subtropical eastern China. The EF in groundwater (0.00195 ± 0.00146) was about one order of magnitude higher than that in surface water (0.00038 ± 0.00020). The N2O EF displayed seasonal and spatial variability in both surface water and groundwater. The EF in surface water showed a negative relationship with N levels and a positive relationship with the dissolved organic carbon to DIN (C:N) ratio. In contrast, the N2O EF in groundwater showed a positive relationship with N level and a negative relationship with DO concentration, implying quite different processes operating in surface water and groundwater. The 2H-H2O/18O-H2O data suggested a high base flow contribution (~70%) to rivers, implying a potential N2O contribution from groundwater to riverine N2O. The 15N-NO3- and 18O-NO3- data indicated that N2O in groundwater was regulated by nitrification and denitrification, while N2O in river networks was mainly derived from nitrification and may also be regulated by hydrological processes. The strong positive relationship between riverine and groundwater N2O concentrations may indicate a potentially high contribution of groundwater N2O to surface water. This study highlights the importance of combining multiple isotope tracers and hydrochemistry to assess riverine N2O dynamics, as well as the need to consider the groundwater N2O contribution when determining riverine N2O emission factors in rivers with high groundwater recharge.
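The emission factor quoted above can be read as a dissolved N2O-N to NO3--N concentration ratio (an EF5r-style definition); the study's exact formulation may differ, so the sketch below is a hedged illustration with concentrations chosen only to reproduce the orders of magnitude reported.

```python
def emission_factor(n2o_n_mg_l, no3_n_mg_l):
    """N2O emission factor as the dissolved N2O-N : NO3--N ratio (unitless)."""
    return n2o_n_mg_l / no3_n_mg_l

# Illustrative concentrations (mg N/L); chosen to match the quoted magnitudes
ef_groundwater = emission_factor(0.0078, 4.0)   # ~0.00195
ef_surface = emission_factor(0.0019, 5.0)       # ~0.00038
```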
How to cite: Hu, M., Dahlgren, R., and Chen, D.: Assessment of hydrological and biogeochemical effects on N2O emission factors in river networks of eastern China based on long-term study, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12362, https://doi.org/10.5194/egusphere-egu2020-12362, 2020.
EGU2020-18741 | Displays | ITS5.2/AS3.17
Towards an European operational monitoring capacity for CO2 emissions: the CO2 Human Emission project at ECMWF
Nicolas Bousserez, Joe McNorton, Melanie Ades, Anna Agusti-Panareda, Gianpaolo Balsamo, Margarita Choulga, Richard Engelen, Johannes Flemming, Antje Innes, Zak Kipling, Sebastien Massart, Mark Parrington, Vincent-Henri Peuch, and Jerome Barre
The CO2 Human Emission (CHE) project is a European initiative bringing together a consortium of 22 European partners to build a prototype global CO2 source inversion system that can provide policy-relevant information on the spatiotemporal characteristics of anthropogenic CO2 emissions. This prototype shall evolve toward a new Copernicus CO2 service, which will provide a Monitoring and Verification Support (MVS) capacity that can address the challenge of the global stocktake (GST) devised under the Paris Agreement. The global inversion system will build on existing operational infrastructures (CAMS, C3S) at the European Centre for Medium-Range Weather Forecasts (ECMWF) to exploit ground-based measurements as well as space-based observations from current and future satellite missions (e.g., Sentinel-5P and CO2M). We will present ongoing efforts at ECMWF to develop a source inversion capability in the current operational Integrated Forecasting System (IFS), which will serve as the basis for the future global CO2 inversion prototype. Preliminary results will be discussed, including model transport error estimates based on Monte Carlo ensemble simulations and the first chemical source optimization experiments performed with the IFS 4D-Var system.
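A 4D-Var source optimization of the kind mentioned above minimizes a cost function combining a background (prior emission) term and an observation term. The sketch below shows this generic formulation only; it is not ECMWF's IFS implementation, and the toy transport operator and error covariances are assumptions for illustration.

```python
import numpy as np

def cost(x, xb, B, y, H, R):
    """J(x) = 0.5 (x-xb)^T B^-1 (x-xb) + 0.5 (y-Hx)^T R^-1 (y-Hx)."""
    dxb = x - xb
    dy = y - H @ x
    return 0.5 * dxb @ np.linalg.solve(B, dxb) + 0.5 * dy @ np.linalg.solve(R, dy)

# Two emission scaling factors observed through a toy linear "transport" H
xb = np.array([1.0, 1.0])            # prior emission scaling
B = 0.25 * np.eye(2)                 # prior error covariance
H = np.array([[0.6, 0.4], [0.2, 0.8]])
R = 0.01 * np.eye(2)                 # observation error covariance
y = H @ np.array([1.2, 0.9])         # synthetic observations from a "true" state
```

In the operational system the minimization is performed iteratively with the adjoint of the transport model rather than by direct linear algebra.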
How to cite: Bousserez, N., McNorton, J., Ades, M., Agusti-Panareda, A., Balsamo, G., Choulga, M., Engelen, R., Flemming, J., Innes, A., Kipling, Z., Massart, S., Parrington, M., Peuch, V.-H., and Barre, J.: Towards an European operational monitoring capacity for CO2 emissions: the CO2 Human Emission project at ECMWF, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18741, https://doi.org/10.5194/egusphere-egu2020-18741, 2020.
EGU2020-8714 | Displays | ITS5.2/AS3.17
Greenhouse Gas Analyzing Platform using Ground Sites, Aircraft, Ships, and Satellite-based Data: Japan's Contribution to the Paris Agreement
Nobuko Saigusa, Toshinobu Machida, Shin-ichiro Nakaoka, Tsuneo Matsunaga, Hiroshi Tanimoto, Yosuke Niwa, Yukio Terao, and Akihiko Ito
Asia, as one of the world’s largest greenhouse gas (GHG) emitters, has a responsibility to play an important role in turning the goals of the Paris Agreement into reality. Urgent needs in Earth observations for GHGs are to reduce uncertainties in source and sink estimates and to identify current knowledge gaps and requirements for further international collaboration. Estimating anthropogenic and natural emissions based on GHG observations has great potential for providing additional information that can support assessment of the impacts of mitigation actions. Discussion will focus on the current status of, and challenges for, Japan's GHG observation and analysis activities, aimed at improving up-to-date analysis systems and data coverage, particularly in Asia–Oceania, for better estimation of the distribution of anthropogenic and natural sources and sinks with sufficient accuracy.
How to cite: Saigusa, N., Machida, T., Nakaoka, S., Matsunaga, T., Tanimoto, H., Niwa, Y., Terao, Y., and Ito, A.: Greenhouse Gas Analyzing Platform using Ground Sites, Aircraft, Ships, and Satellite-based Data: Japan's Contribution to the Paris Agreement, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8714, https://doi.org/10.5194/egusphere-egu2020-8714, 2020.
EGU2020-5567 | Displays | ITS5.2/AS3.17
Impact of using various prior flux models on the posterior NEE derived from the Jena Carboscope regional inversion system
Saqr Munassar, Christoph Gerbig, Frank-Thomas Koch, and Christian Rödenbeck
Regional flux estimates over Europe have been calculated using the two-step inverse system of the Jena CarboScope Regional inversion (CSR) to estimate annual CO2 budgets for recent years, in cooperation with the research project VERIFY. The CSR system assimilates observational datasets of CO2 mixing ratios provided by the Integrated Carbon Observation System (ICOS) across the European domain to optimize Net Ecosystem Exchange (NEE) fluxes computed by biosphere models at a spatial resolution of 0.25 degrees. Ocean fluxes are assumed to be constant over time. Fossil fuel emissions are obtained from EDGAR_v4.3 and updated based on British Petroleum (BP) statistics. Thus, only biosphere-atmosphere exchange fluxes are optimized against the atmospheric data.
In this study we focus on the impact of using a-priori fluxes from different biosphere and ocean models on the annual CO2 budget of posterior fluxes. Results calculated using the Vegetation and Photosynthesis Respiration Model (VPRM) and Simple Biosphere/Carnegie-Ames Stanford Approach (SiBCASA) models show a consistent posterior interannual variability, largely independent of which prior fluxes are used, even though those prior fluxes show considerable differences on annual scales.
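The reported insensitivity of the posterior to the choice of prior can be illustrated with a minimal scalar Bayesian (Kalman-type) update: when the observations constrain the flux tightly (observation-error variance small relative to the prior spread), posteriors starting from very different priors nearly coincide. All numbers below are illustrative, not values from the CSR system.

```python
def posterior(x_prior, B, y, R):
    """Posterior flux for a direct observation y of the flux itself."""
    K = B / (B + R)                  # Kalman gain
    return x_prior + K * (y - x_prior)

y, B, R = 5.0, 1.0, 0.01             # observed flux, prior var, obs-error var
post_a = posterior(0.0, B, y, R)     # one hypothetical prior ("VPRM-like")
post_b = posterior(3.0, B, y, R)     # a very different prior ("SiBCASA-like")
```

Although the two priors differ by 3 units, the two posteriors differ by only about 0.03, mirroring the consistent posterior interannual variability found in the study.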
How to cite: Munassar, S., Gerbig, C., Koch, F.-T., and Rödenbeck, C.: Impact of using various prior flux models on the posterior NEE derived from the Jena Carboscope regional inversion system, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5567, https://doi.org/10.5194/egusphere-egu2020-5567, 2020.
EGU2020-13819 | Displays | ITS5.2/AS3.17
Estimation of surface CO2 flux in East Asia and comparison with the national greenhouse gases emission inventory
Minkwang Cho and Hyun Mee Kim
In this study, surface carbon dioxide (CO2) fluxes were estimated over East Asia using an inverse modeling approach. Two CO2 mole fraction datasets observed in South Korea (Anmyeon-do (AMY) and Gosan (GSN)), along with the ObsPack observation data package, were assimilated in the CarbonTracker system, and the characteristics of the estimated surface CO2 flux were analyzed over ten years. To assess the impact of including the two Korean Peninsula datasets, a second experiment assimilating only the ObsPack data was conducted for comparison.
The results showed that including the two additional datasets in the data assimilation slightly enhanced surface CO2 flux absorption in summer and weakened surface CO2 flux emission in late autumn and spring. This was particularly evident in the Eurasian boreal and Eurasian temperate regions. Validation against independent surface and aircraft observations (Comprehensive Observation Network for Trace gases by Airliner; CONTRAIL) showed smaller root mean square error (RMSE) values and a larger uncertainty reduction for the experiment that additionally assimilated the two Korean observation datasets.
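The two validation metrics used above can be written compactly. The RMSE is standard; "uncertainty reduction" is taken here as one minus the ratio of posterior to prior standard deviation, a common convention that may differ from the study's exact definition. The numbers are illustrative only.

```python
import numpy as np

def rmse(model, obs):
    """Root mean square error between modeled and observed mole fractions."""
    model, obs = np.asarray(model, dtype=float), np.asarray(obs, dtype=float)
    return float(np.sqrt(np.mean((model - obs) ** 2)))

def uncertainty_reduction(sigma_prior, sigma_post):
    """Fractional reduction of flux uncertainty by the assimilation."""
    return 1.0 - sigma_post / sigma_prior

r = rmse([402.1, 399.8, 401.0], [401.5, 400.2, 400.6])   # ppm, illustrative
ur = uncertainty_reduction(1.0, 0.6)                     # 40% reduction
```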
Meanwhile, the biosphere CO2 flux estimated by CarbonTracker was compared with CO2 emissions (or absorption) in the Land Use, Land-Use Change and Forestry (LULUCF) sector from the national greenhouse gas emission inventory (NIR). In the case of South Korea, the observation density (number of observation sites or number of assimilated data relative to the area of the region) appeared to be related to several statistical parameters comparing the inventory and the CarbonTracker results. More results from model-inventory comparisons using other data will be presented in the meeting.
Acknowledgements
This study was supported by the Korea Meteorological Administration Research and Development Program under grant KMI2018-03712 and a National Research Foundation of Korea (NRF) grant funded by the South Korean government (Ministry of Science and ICT) (Grant 2017R1E1A1A03070968). The authors thank Andrew R. Jacobson for providing the CarbonTracker used for this study.
How to cite: Cho, M. and Kim, H. M.: Estimation of surface CO2 flux in East Asia and comparison with the national greenhouse gases emission inventory, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13819, https://doi.org/10.5194/egusphere-egu2020-13819, 2020.
EGU2020-16416 | Displays | ITS5.2/AS3.17
Natural and anthropogenic methane emissions in West Siberia estimated using a wetland inventory, GOSAT and a regional tower network
Shamil Maksyutov, Motoki Sasakawa, Rajesh Janardanan, Fenjuan Wang, Aki Tsuruta, Irina Terentieva, Alexander Sabrekov, Mikhail Glagolev, Toshinobu Machida, Mikhail Arshinov, Denis Davydov, Oleg Krasnov, Boris Belan, Ed Dlugokencky, Jost V. Lavric, Akihiko Ito, Greet Janssens-Maenhout, Johannes Kaiser, Yukio Yoshida, and Tsuneo Matsunaga
West Siberia contributes a large fraction of Russian methane emissions, with both natural emissions from peatlands and anthropogenic emissions from the oil and gas industries. To quantify anthropogenic emissions with atmospheric observations and inventories, we must better understand the natural wetland emissions. We combine high-resolution wetland mapping based on Landsat data for the whole West Siberian Lowland with a database of in situ flux measurements to derive bottom-up wetland emission estimates. We use a global high-resolution methane flux inversion based on a coupled Lagrangian-Eulerian tracer transport model to estimate methane emissions in West Siberia using atmospheric methane data collected at the Siberian GHG monitoring sites JR-STATION and ZOTTO, data from the global in situ network, and GOSAT satellite observations. High-resolution prior fluxes were prepared for anthropogenic emissions (EDGAR), biomass burning (GFAS), and wetlands (VISIT). A global high-resolution wetland emission dataset was constructed using 0.5-degree monthly emission data simulated by the VISIT model and a wetland area fraction map from the Global Lakes and Wetlands Database (GLWD). We estimate biweekly corrections to the prior flux fields for 2010 to 2015. The inverse model optimizes corrections to two categories of fluxes: anthropogenic and natural (wetlands). Based on fitting the model simulations to the observations, the inverse model provides upward corrections to West Siberian anthropogenic emissions in winter and to wetland emissions in summer. The use of high-resolution atmospheric transport in the flux inversion, compared with low-resolution transport modeling, enables a better fit to observations in winter, when anthropogenic emissions dominate the variability of near-surface methane concentrations. We estimate 15% higher anthropogenic emissions than the EDGAR v4.3.2 inventory for the whole of Russia, with most of the correction attributed to West Siberia and the European part of Russia.
Comparison of the inversion estimates with the bottom-up wetland emission inventory for West Siberia suggests a need to adjust the wetland emissions to match the observed north-south gradient, with higher emissions in the southern taiga zone.
How to cite: Maksyutov, S., Sasakawa, M., Janardanan, R., Wang, F., Tsuruta, A., Terentieva, I., Sabrekov, A., Glagolev, M., Machida, T., Arshinov, M., Davydov, D., Krasnov, O., Belan, B., Dlugokencky, E., Lavric, J. V., Ito, A., Janssens-Maenhout, G., Kaiser, J., Yoshida, Y., and Matsunaga, T.: Natural and anthropogenic methane emissions in West Siberia estimated using a wetland inventory, GOSAT and a regional tower network, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-16416, https://doi.org/10.5194/egusphere-egu2020-16416, 2020.
EGU2020-12037 | Displays | ITS5.2/AS3.17
Isotopic fingerprinting of fugitive methane and CO2 from the Western Canada Sedimentary Basin (WCSB): Data documentation and impact
Kalis Muehlenbachs and Gabriela Gonzalez Arismendi
The general public, industry and regulators seek information about both intentional and unintentional greenhouse gas (GHG) emissions from energy wells. Minimizing these emissions may be one of the easiest steps toward reaching reduction targets. δ13C is a common tool used to assess sources of atmospheric methane. Here we report and map the isotopic composition of 1280 production gases from energy wells in the Western Canada Sedimentary Basin (WCSB), which mark the δ13C of downstream GHG emissions in the production and transmission network. The WCSB is a globally recognized hydrocarbon producer, with more than 450,000 energy wells drilled in Alberta alone. Produced methane δ13C ranges from -70‰ (VPDB; biogenic source) to -23‰ (VPDB; overmature shale), averaging -47.2‰. Many currently producing, shut-in and abandoned wells also emit fugitive gas through surface casing vent flow (SCVF) and soil/ground migration (GM). The δ13C of these fugitive gases usually indicates a shallower source than the production target (average SCVF δ13C-CH4 = -55.6‰, GM δ13C-CH4 = -58.0‰; average SCVF δ13C-CO2 = -55.6‰, GM δ13C-CO2 = -15.8‰). Maps (isoscapes) of isotope values from 2800 SCVF and 1800 GM gases sampled across the WCSB show that geology and topography constrain the source of leaks. The spatial distribution and wide range of δ13C of fugitive methane across the WCSB provide insights and data to climate modellers seeking to attribute atmospheric methane sources, and are also relevant for emission mitigation and for informing regulators.
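Source attribution from δ13C of the kind described above is often reduced to two-endmember mixing. The sketch below uses the biogenic and thermogenic endmember values quoted in the abstract; a real attribution would need locally calibrated endmembers and uncertainty treatment, so this is illustrative only.

```python
def biogenic_fraction(d13c_sample, d13c_bio=-70.0, d13c_thermo=-23.0):
    """Solve d_sample = f*d_bio + (1-f)*d_thermo for the biogenic fraction f.

    Endmember delta13C values (permil, VPDB) default to the range quoted
    for WCSB production gases; they are assumptions for illustration.
    """
    return (d13c_sample - d13c_thermo) / (d13c_bio - d13c_thermo)

f = biogenic_fraction(-47.2)   # basin-average production-gas delta13C
```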
How to cite: Muehlenbachs, K. and Gonzalez Arismendi, G.: Isotopic fingerprinting of fugitive methane and CO2 from the Western Canada Sedimentary Basin (WCSB): Data documentation and impact , EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12037, https://doi.org/10.5194/egusphere-egu2020-12037, 2020.
EGU2020-20825 | Displays | ITS5.2/AS3.17
Assimilation of GOSAT XCO2 data to optimize surface CO2 flux in East Asia
Min-Gyung Seo and Hyun Mee Kim
Because East Asia is the third-largest source region of CO2 after North America and Europe, there is a need to estimate its surface CO2 fluxes accurately. However, because the number of surface CO2 observations in East Asia is relatively small compared to that in North America and Europe, the estimation of surface CO2 fluxes in East Asia has relatively large uncertainties. To supplement sparse surface CO2 observations, satellite observations can be used.
In this study, column-averaged dry-air CO2 mole fraction (XCO2) data from the Greenhouse gases Observing SATellite (GOSAT) project were used to estimate surface CO2 fluxes in East Asia. CarbonTracker, developed by the NOAA Earth System Research Laboratory, was used as the inverse modeling system. To assimilate GOSAT XCO2 data in CarbonTracker, an observation operator for the GOSAT XCO2 data was developed. To determine an appropriate model-data mismatch (MDM) for the GOSAT XCO2 data, a sensitivity test was conducted. The experiment assimilating GOSAT data showed lower bias and RMSE than that without GOSAT data. In addition, the experiment using a 2 ppm MDM for the GOSAT data showed lower bias and RMSE than that using a 3 ppm MDM.
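The core of an XCO2 observation operator like the one described above maps a model CO2 profile to a column average with pressure-based layer weights. Real GOSAT operators also apply the retrieval's averaging kernel and a priori profile, which are omitted from this hedged sketch; the pressure grid and profile values are illustrative.

```python
import numpy as np

def xco2(co2_profile_ppm, p_levels_hpa):
    """Pressure-weighted column average; p_levels ordered surface to top."""
    dp = -np.diff(np.asarray(p_levels_hpa, dtype=float))  # layer thicknesses
    weights = dp / dp.sum()                               # dry-air mass proxy
    return float(np.sum(weights * np.asarray(co2_profile_ppm, dtype=float)))

levels = [1000.0, 700.0, 400.0, 100.0]       # hPa, illustrative 3-layer grid
col = xco2([410.0, 405.0, 400.0], levels)    # ppm per layer, surface first
```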
The surface CO2 fluxes over East Asia from the experiments with and without GOSAT data were also compared. By assimilating GOSAT observations, the absorption of surface CO2 fluxes in the ocean became strong and that in land became weaker. Especially, the absorption of surface CO2 fluxes in the Eurasian Boreal region became much weaker than in other regions. The uncertainty reduction was also the largest in the Eurasian Boreal region where the surface CO2 observations are sparse.
Therefore GOSAT XCO2 data have a profound impact on estimating the surface CO2 fluxes in East Asia where the surface observations are insufficient.
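A minimal sketch of what such an observation operator does: it maps a model CO2 profile to a satellite-equivalent column value using a pressure weighting function and the retrieval's averaging kernel. The three-layer shape, weights, and profile values below are illustrative placeholders, not CarbonTracker's actual interface.

```python
import numpy as np

def simulate_xco2(co2_model, co2_prior, xco2_prior, pres_weight, avg_kernel):
    # Pressure-weighted, kernel-smoothed column perturbation added to the
    # retrieval's prior column value.
    return xco2_prior + np.sum(pres_weight * avg_kernel * (co2_model - co2_prior))

h = np.array([0.5, 0.3, 0.2])                     # pressure weighting (sums to 1)
a = np.ones(3)                                     # idealised averaging kernel
prior_profile = np.full(3, 400.0)                  # retrieval prior CO2 profile, ppm
model_profile = np.array([402.0, 401.0, 400.0])    # model CO2 profile, ppm
xco2_sim = simulate_xco2(model_profile, prior_profile, 400.0, h, a)
```

The simulated XCO2, not the raw model profile, is what gets compared against the GOSAT retrieval in the assimilation.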
Acknowledgments
This study was supported by the Korea Meteorological Administration Research and Development Program under grant KMI2018-03712 and a National Research Foundation of Korea (NRF) grant funded by the South Korean government (Ministry of Science and ICT) (Grant 2017R1E1A1A03070968). The authors thank Andrew R. Jacobson for providing the CarbonTracker and JAXA/NIES/MOE for providing GOSAT data.
How to cite: Seo, M.-G. and Kim, H. M.: Assimilation of GOSAT XCO2 data to optimize surface CO2 flux in East Asia, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20825, https://doi.org/10.5194/egusphere-egu2020-20825, 2020.
EGU2020-8480 | Displays | ITS5.2/AS3.17
Evaluation of model-data mismatch errors in the CarboScope-Regional Inversion SystemFrank-Thomas Koch, Saqr Munas, Christian Roedenbeck, and Christoph Gerbig
With an increasing network of atmospheric stations producing a constant data stream, top-down inverse transport modelling of GHGs in a quasi-operational way becomes feasible. The CarboScope-Regional inversion system embeds the regional inversion within a global inversion using a two-step approach. The regional inversion consists of Lagrangian mesoscale transport from STILT, prior fluxes from the diagnostic VPRM biosphere model, and anthropogenic emissions from a combination of EDGAR v4.3 with the annually updated BP statistical report. Regional ocean fluxes were derived from the CarboScope ocean flux product based on SOCATv2019 data. The inversion uses atmospheric observations from 44 stations to infer biosphere-atmosphere exchange. The regional domain covers most of Europe (33–73°N, 15°W–35°E) with a spatial resolution of 0.25° for fluxes and 0.5° for the flux corrections inferred by the inversion (i.e. the state space).
One of the critical parameters is the assumed uncertainty of the observations, whose major contribution is the model-data mismatch error, or representation error. Within CarboScope-Regional, this error is specified differently for different station types, such as tall tower, mountain, or coastal stations. To evaluate the validity and appropriateness of these assumed uncertainties, a leave-one-out cross-validation is applied for a single year: all stations except one are used for the inversion, and the posterior concentrations predicted for the omitted station are compared with the observed concentrations. Results of this cross-validation will be presented separately for the different station types and will be used to evaluate the magnitude of the assumed model-data mismatch errors.
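The cross-validation loop can be sketched generically as follows; `run_inversion` and `predict` are hypothetical stand-ins for the real inversion and forward transport (they are not part of CarboScope), and the toy "inversion" in the demo is simply the mean of the used observations.

```python
import numpy as np

def loo_residuals(stations, run_inversion, predict):
    """Leave-one-out cross-validation over a station network: invert
    without one station, then compare its observations with the
    concentrations predicted from the resulting posterior."""
    residuals = {}
    for left_out in stations:
        used = [s for s in stations if s is not left_out]
        posterior = run_inversion(used)            # inversion without this station
        predicted = predict(posterior, left_out)   # forward model at the omitted site
        residuals[left_out["name"]] = left_out["obs"] - predicted
    return residuals

# Toy demo: the "inversion" is just the mean of the used observations,
# and the "prediction" returns that mean everywhere.
stations = [{"name": n, "obs": o} for n, o in [("A", 1.0), ("B", 2.0), ("C", 3.0)]]
res = loo_residuals(
    stations,
    run_inversion=lambda used: np.mean([s["obs"] for s in used]),
    predict=lambda posterior, station: posterior,
)
```

Comparing the distribution of such residuals per station type against the assumed model-data mismatch indicates whether the assumed uncertainties are too tight or too loose.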
How to cite: Koch, F.-T., Munas, S., Roedenbeck, C., and Gerbig, C.: Evaluation of model-data mismatch errors in the CarboScope-Regional Inversion System, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8480, https://doi.org/10.5194/egusphere-egu2020-8480, 2020.
EGU2020-9409 | Displays | ITS5.2/AS3.17
Simulations of greenhouse gas emissions and soil organic carbon with ECOSSE for a rice field in Northern ItalyMatthias Kuhnert, Viktoria Oliver, Andrea Volante, Stefano Monaco, Yit Arn Teh, and Pete Smith
Rice cultivation has a high water consumption and emits large quantities of greenhouse gases. Rice fields therefore offer great potential to mitigate GHG emissions through modified cultivation practices or external inputs. Previous studies showed differing impacts of alternate wetting and drying (AWD) practices on above-ground and below-ground biomass, which might have long-term impacts on soil organic carbon (SOC) stocks. The objective of this study is to parameterise and evaluate the ECOSSE model for rice simulations based on data from an Italian rice test site where the effects of different water management practices and 12 common European cultivars on yield and GHG emissions were investigated. Special focus is placed on the differing impacts of AWD and continuous flooding (CF) on greenhouse gas emissions. The model is calibrated and tested against field measurements and is used in model experiments to explore climate change impacts and long-term effects. Long-term carbon storage is of particular interest since it is a suitable mitigation strategy. As the experiments showed different impacts of management practices on below-ground biomass, long-term model experiments are used to estimate the impacts of the different practices on SOC. The measurements also allow an analysis of the impacts of different cultivars and of the uncertainty of model approaches that use a single data set for calibration.
How to cite: Kuhnert, M., Oliver, V., Volante, A., Monaco, S., Teh, Y. A., and Smith, P.: Simulations of greenhouse gas emissions and soil organic carbon with ECOSSE for a rice field in Northern Italy, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9409, https://doi.org/10.5194/egusphere-egu2020-9409, 2020.
EGU2020-9469 | Displays | ITS5.2/AS3.17
Greenhouse gas emissions of European croplandsMatthias Kuhnert, Michael Martin, Adrian Leip, Matthew McGrath, and Pete Smith
Agriculture is a significant source of greenhouse gas (GHG) emissions in Europe. Croplands contribute to this source, but their contributions are difficult to estimate, as the influencing factors are complex. Human management actions are even more important than environmental drivers for agricultural emissions, but spatially explicit datasets are scarce. This causes high uncertainty in GHG emission simulations. We simulated GHG emissions (2010-2015) for selected crops (wheat and maize) in Europe with the biogeochemical model platform SPATIAL ECOSSE, using spatially explicit management data from the CAPRI model used by JRC and model approaches based on Waha et al. (2012) to derive spatial management data (grid maps) for the EU27. First results reveal that emission estimates are highly sensitive to soil organic carbon (SOC), which results in hotspots of GHG emissions in northern Europe where SOC content is high. This effect is stronger for wheat than for maize. The first results show changes in SOC ranging from 374 to 456 g C m-2 yr-1 and from 317 to 399 g C m-2 yr-1 across Europe (EU27) for wheat and maize, respectively, which are larger than the values reported in previous studies (e.g., 299 g C m-2 yr-1 by Ciais et al., 2010).
Ciais, P., Wattenbach, M., Vuichard, N., Smith, P., Piao, S.L., Don, A., Luyssaert, S., Janssens, I., Bondeau, A., Dechow, R., Leip, A., Smith, P., Beer, C., van der Werf, G.R., Gervois, S., Van Oost, K., Tomelleri, E., Freibauer, A., Schulze, E.D., 2010. The European greenhouse gas balance. Part 2: croplands. Global Change Biology 16, 1409–1428.
Waha, K., van Bussel, L. G. J., Müller, C., & Bondeau, A. (2012). Climate-driven simulation of global crop sowing dates. Global Ecology and Biogeography, 21(2), 247–259. https://doi.org/10.1111/j.1466-8238.2011.00678.x
How to cite: Kuhnert, M., Martin, M., Leip, A., McGrath, M., and Smith, P.: Greenhouse gas emissions of European croplands, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9469, https://doi.org/10.5194/egusphere-egu2020-9469, 2020.
EGU2020-12574 | Displays | ITS5.2/AS3.17
World’s potential energy production from microalgae on marginal landQingtao Zhang
The overuse of petrochemical fuels has depleted oil reservoirs and aggravated environmental problems such as greenhouse gas emissions and global warming. Therefore, it is necessary to develop greener and more sustainable alternatives. Carbon dioxide is the main contributor to the global warming crisis. Biomass energy has received the most attention in many integrated assessment model studies and the latest IPCC reports. Among existing carbon capture technologies, microalgae-based biological carbon capture is one of the most promising and least energy-intensive.
Microalgae are emerging as a third-generation bioenergy feedstock due to their high carbon dioxide fixation efficiency, high biomass productivity, and relatively simple pretreatment processes for various biofuel extractions. Besides, microalgae have low demands on water quality and soil fertility compared to traditional energy plants. This means that growing microalgae on marginal land (non-fertile land that is not suitable for agriculture) could be a promising route for bioenergy production and CO2 mitigation.
This study aims to evaluate the potential energy production from microalgae on marginal land. We combined geospatial data on climate, soil, and terrain to estimate the marginal land of each country. Using Williams and Laurens' model (2010), we calculated annual microalgae areal biomass yields for different latitudes and evaluated the annual potential energy production from microalgae on marginal land. It is estimated that microalgae may generate up to 67.9 billion tons of coal equivalent of potential energy per year on the total marginal land. By replacing fossil fuels, the corresponding emission reduction potential is 290.6 billion tons of carbon dioxide.
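The potential-energy accounting described above can be sketched as a back-of-envelope calculation: area times areal dry-biomass yield times heating value, converted to tonnes of coal equivalent. The area, yield, and heating value below are illustrative placeholders, not the study's actual inputs.

```python
# 1 tonne of coal equivalent (tce) corresponds to about 29.3 GJ.
TCE_GJ = 29.3

def annual_energy_tce(area_ha, yield_t_per_ha_yr, heating_value_gj_per_t):
    """Annual potential energy (tce/yr) from algal biomass on marginal land:
    area x areal dry-biomass yield x heating value, converted to tce."""
    biomass_t = area_ha * yield_t_per_ha_yr        # dry biomass per year, tonnes
    energy_gj = biomass_t * heating_value_gj_per_t
    return energy_gj / TCE_GJ

# Example inputs (placeholders): 1 Mha of marginal land, 60 t/ha/yr dry
# biomass yield, 20 GJ/t heating value.
tce = annual_energy_tce(1e6, 60.0, 20.0)
```

Scaling such per-country estimates by each country's marginal-land area and latitude-dependent yield gives the kind of global total the abstract reports.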
How to cite: Zhang, Q.: World’s potential energy production from microalgae on marginal land, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12574, https://doi.org/10.5194/egusphere-egu2020-12574, 2020.
EGU2020-21308 | Displays | ITS5.2/AS3.17
Reduction of uncertainties in greenhouse gas accounting using Global Forest Watch forest loss database, LiDAR and stand wise inventory dataJanis Ivanovs, Andis Lazdins, and Arta Bardule
Global Forest Watch (GFW) provides a global map of forest loss derived from Landsat satellite imagery, providing a tool for monitoring global forest change. In managed forests, GFW mainly captures commercial logging. This study is part of the INVENT project, which aims to improve the National Forest Inventory-based estimates of carbon stock changes in forests reported to the UNFCCC. The purpose of this study is to evaluate the feasibility of using the GFW database to detect carbon stock changes in forest stands, using LiDAR (Light Detection and Ranging) data and the stand-wise forest database maintained by the State Forest Service (SFS) as additional data sources.
Only those forest loss areas from the GFW database that were also covered by the national LiDAR survey were selected for data processing, thus providing 3D forest information prior to felling. Information on species composition and the number of trees per hectare was obtained from the SFS stand-wise forest database. Living biomass estimates were then calculated for each GFW pixel. For pixels outside the SFS stand-wise forest database, living biomass values were determined by extrapolation. The average estimated living biomass per forest loss pixel in the GFW database is 6792 kg.
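A minimal sketch of the per-pixel assignment step, with the mean of database-covered pixels standing in for the study's extrapolation method; the biomass values are invented for illustration.

```python
import numpy as np

# Living biomass per forest-loss pixel (kg): NaN marks pixels outside the
# stand-wise database; values are invented for illustration.
biomass = np.array([5200.0, 7100.0, np.nan, 6400.0, np.nan])

mean_covered = np.nanmean(biomass)                           # mean over covered pixels
filled = np.where(np.isnan(biomass), mean_covered, biomass)  # fill uncovered pixels
```

Summing the filled values over all forest-loss pixels then yields a biomass (and hence carbon) loss estimate for the whole detected area.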
Keywords: ERA-GAS INVENT, living biomass, carbon stock.
How to cite: Ivanovs, J., Lazdins, A., and Bardule, A.: Reduction of uncertainties in greenhouse gas accounting using Global Forest Watch forest loss database, LiDAR and stand wise inventory data, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21308, https://doi.org/10.5194/egusphere-egu2020-21308, 2020.
EGU2020-21388 | Displays | ITS5.2/AS3.17
Development of land use matrix using geospatial data of National forest inventoryLinards Ludis Krumsteds, Janis Ivanovs, Andis Lazdins, and Raitis Melniks
Calculation of the land use and land use change matrix is one of the key elements of the national greenhouse gas (GHG) inventory in the land use, land use change and forestry (LULUCF) sector. The main purpose of the matrix is to present comprehensive and harmonized land use and land use change information nationwide over a certain time period. Information on land use and land use changes is further used to calculate other parameters important for determining carbon stock changes and GHG emissions, such as the stock changes of living and dead biomass, as well as basic information on applied management measures. The aim of this study is to improve the methodology for development and maintenance of the land use and land use change matrix in the national GHG inventory system using geospatial data from the National Forest Inventory (NFI) and auxiliary data sources. The matrix is created in a semi-automated way using GIS tools, which eliminates possible inconsistencies in the reported data and makes the calculation process less time-consuming than before. The new calculation method takes into account present land use data from the NFI and land use data from the two previous NFI cycles, considerably reducing the uncertainty of the estimates, and accounts for land management practices that may alter the land use category in the long term. Auxiliary data, such as the national land parcel information system (LPIS), have been introduced to increase the certainty, consistency and accuracy of the final land-use category determination. Year-by-year land use change extent is derived by linear interpolation, and extrapolation is used for the subsequent years for which NFI data are not yet available.
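The core of such a matrix calculation can be sketched as a cross-tabulation of plot-level categories from two inventory cycles, with the observed change spread linearly over the years between them. The plot labels and the 5-year cycle length below are assumptions for illustration, not the NFI's actual data.

```python
import pandas as pd

# Plot-level land use categories recorded in two consecutive NFI cycles.
plots = pd.DataFrame({
    "cycle1": ["forest", "forest", "cropland", "grassland"],
    "cycle2": ["forest", "cropland", "cropland", "forest"],
})

# Land use change matrix: rows = category in cycle 1, columns = in cycle 2.
matrix = pd.crosstab(plots["cycle1"], plots["cycle2"])

# Annual forest-to-cropland conversion, spreading the observed change
# evenly over the years between cycles (linear interpolation).
years_between_cycles = 5
annual_forest_to_crop = matrix.loc["forest", "cropland"] / years_between_cycles
```

The diagonal of the matrix holds unchanged land, the off-diagonal cells the transitions that feed the carbon stock change calculations.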
Key words: ERA-GAS INVENT, land use and land use changes, national forest inventory, greenhouse gas inventory.
How to cite: Krumsteds, L. L., Ivanovs, J., Lazdins, A., and Melniks, R.: Development of land use matrix using geospatial data of National forest inventory, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21388, https://doi.org/10.5194/egusphere-egu2020-21388, 2020.
EGU2020-21688 | Displays | ITS5.2/AS3.17
Building a modelling framework to simulate ecosystem processes under changing climate: the long road from Biome-BGC to Biome-BGCMAgDóra Hidy, Nándor Fodor, Roland Hollós, and Zoltán Barcza
During the past 15 years, our research group has been developing the Biome-BGCMAg (formerly known as Biome-BGCMuSo) biogeochemical model to improve its ability to simulate the carbon and water cycles in different ecosystems, with options for managed croplands, grasslands, and forests. We made various model improvements based on the results of model validation and benchmarking. Our goal is a model that is suitable for estimating and predicting greenhouse gas fluxes of different ecosystems at various scales under changing management and climate conditions.
The most recent version, Biome-BGCMAg, is a process-based biogeochemical model that simulates the storage and flux of water, carbon, and nitrogen in the soil-plant-atmosphere system. Biome-BGCMAg was derived from the widely known Biome-BGC v4.1.1 model developed by the Numerical Terradynamic Simulation Group (NTSG), University of Montana, USA. One of the most important developments is the implementation of a multilayer soil module with water, carbon, nitrogen, and soil organic matter profiles. We implemented drought- and anoxic-soil-state-related plant mortality. Alternative calculation methods for various processes were implemented to support a possible algorithm-ensemble modelling approach. An optional dynamic allocation algorithm was introduced, using predefined phenophases based on the growing degree day method. We implemented an optional temperature dependence of allocation and possible assimilation downregulation as a function of temperature. The nitrogen budget simulation was improved. Furthermore, human intervention modules were developed to simulate cropland management (e.g. planting, harvest, ploughing, and application of fertilizers) and forest thinning. Dynamic whole-plant mortality was implemented in the model to enable a more realistic simulation of forest stand development. Last but not least, conditional management (irrigation and mowing) was introduced to analyze the effect of different management strategies in the future. We have started to build an R-based software package to increase the visibility of the model and enable its use by the wider scientific community.
In our first attempt to simulate a national-scale greenhouse gas budget with Biome-BGCMAg 2.0, we ran the model at 10 x 10 km spatial resolution for Hungary, using eco-physiological parameterization and prescribed management for maize, winter wheat, forests and grassland. The first results revealed that the spatial pattern of net primary production and crop yield is not represented well by the model. Based on these first experiences, we introduced new features in Biome-BGCMAg 2.1 that address soil-water-deficit-related photosynthesis down-regulation. The previously missing stomatal conductance effect on C4 photosynthesis was also addressed by the new developments.
How to cite: Hidy, D., Fodor, N., Hollós, R., and Barcza, Z.: Building a modelling framework to simulate ecosystem processes under changing climate: the long road from Biome-BGC to Biome-BGCMAg, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21688, https://doi.org/10.5194/egusphere-egu2020-21688, 2020.
During the past 15 years, our research group was developing the Biome-BGCMAg (formerly known as Biome-BGCMuSo) biogeochemical model to improve its ability to simulate carbon and water cycle in different ecosystems, with options for managed croplands, grasslands, and forests. We made various model improvements based on the results of model validation and benchmarking. Our goal is to have a model that is suitable for estimating and predicting greenhouse gas fluxes of different ecosystems at various scales under changing management and climate conditions.
The current, most recent model is called Biome-BGCMAg which is a process-based, biogeochemical model that simulates the storage and flux of water, carbon, and nitrogen in the soil-plant-atmosphere system. Biome-BGCMAg was derived from the widely known Biome-BGC v4.1.1 model developed by the Numerical Terradynamic Simulation Group (NTSG), University of Montana, USA. One of the most important model developments is the implementation of a multilayer soil module with water, carbon, nitrogen, and soil organic matter profiles. We implemented drought and anoxic soil state-related plant mortality. Alternative calculation methods for various processes were implemented to support possible algorithm ensemble modelling approach. Optional dynamic allocation algorithm was introduced using predefined phenophases based on growing degree day method. We implemented optional temperature dependence of allocation and possible assimilation downregulation as a function of temperature. Nitrogen budget simulation was improved. Furthermore, human intervention modules were developed to simulate cropland management (e.g. planting, harvest, ploughing, and application of fertilizers) and forest thinning. Dynamic whole plant mortality was implemented in the model to enable more realistic simulation of forest stand development. Last (but not least) conditional management (irrigation and mowing) was introduced to analyze the effect of different management strategies in the future. We started to build a sophisticated R based software to increase the visibility of the model and enable its use by the wider scientific community.
In our first attempt to simulate a national-scale greenhouse gas budget with Biome-BGCMAg 2.0, we ran the model at 10 x 10 km spatial resolution for Hungary, using eco-physiological parameterization and prescribed management for maize, winter wheat, forests, and grassland. The first results revealed that the spatial patterns of net primary production and crop yield are not represented well by the model. Based on these experiences, we introduced new features in Biome-BGCMAg 2.1 that address soil-water-deficit-related photosynthesis down-regulation. The previously missing stomatal conductance effect on C4 photosynthesis was also addressed by the new developments.
How to cite: Hidy, D., Fodor, N., Hollós, R., and Barcza, Z.: Building a modelling framework to simulate ecosystem processes under changing climate: the long road from Biome-BGC to Biome-BGCMAg, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21688, https://doi.org/10.5194/egusphere-egu2020-21688, 2020.
EGU2020-21374 | Displays | ITS5.2/AS3.17
Potential of GPS-based smartphone application data for country-wide emission estimationKarol Szymankiewicz, Lech Gawuc, and Marek Soliwoda
Road transport emissions are among the primary causes of poor air quality in cities. Typically, activity data on road transport are based on point-wise automatic traffic measurements or traffic modelling environments such as VISUM. However, such methods do not provide the complete spatial patterns of emissions that are needed for air quality modelling. On the other hand, modern smartphone applications, which drivers use to navigate and to report road hazards, can provide a full spatial pattern of road traffic.
We will present preliminary results of road transport emission estimates based on GPS-based smartphone application data. The datasets describe the average speed and number of users for every road segment in Poland, including both major and minor roads. The data are based on the OpenStreetMap road geometry and include more than 4.5 million road segments covering 840 thousand km of roads.
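A hedged sketch of how such segment-level speed and volume data could feed an emission estimate (our assumption about the workflow; the speed-dependent emission-factor curve below is an illustrative placeholder, not a COPERT or similar certified function):

```python
def emission_factor(speed_kmh):
    # Illustrative U-shaped speed dependence in g CO2 per vehicle-km:
    # higher emissions in congestion and at high speed (coefficients are made up)
    return 180.0 + 0.04 * (speed_kmh - 60.0) ** 2

def segment_emissions(n_vehicles, length_km, speed_kmh):
    # Emissions of one road segment: traffic volume x length x emission factor
    return n_vehicles * length_km * emission_factor(speed_kmh)
```

Summing such segment totals over the 4.5 million OpenStreetMap segments would yield a gridded, country-wide emission field.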
How to cite: Szymankiewicz, K., Gawuc, L., and Soliwoda, M.: Potential of GPS-based smartphone application data for country-wide emission estimation, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21374, https://doi.org/10.5194/egusphere-egu2020-21374, 2020.
EGU2020-5633 | Displays | ITS5.2/AS3.17
Uncertainty analysis of greenhouse gases emission: application to the EDGAR inventoryEfisio Solazzo, Peter Bergamaschi, margarita Choulga, Gabriel Oreggioni, Marilena Muntean, Monica Crippa, and Greet Janssens-Maenhout
Emission inventories of greenhouse gases built up from international statistics of human-related activities and emission factors (often referred to as ‘bottom-up’ inventories) are at the core of emission trend analyses that inform policy actions and scientific applications and that support climate negotiations and mitigation pledges.
The quantification of the inherent uncertainty of these inventories is gaining importance, as it could allow moving towards a verification system in support of the enhanced transparency framework of the Paris Agreement, in particular the global stocktakes. Recently, two H2020 projects – CHE (CO2 Human Emissions) and VERIFY – have been focusing on this sensitive aspect. This paper presents an unprecedented propagation of uncertainty applied to emissions of CO2, CH4 and N2O, relevant to both projects. Starting from the human emission estimates of the Emission Database for Global Atmospheric Research (EDGAR), which encompasses historic and sectoral emissions from all world countries, and using the error propagation method, uncertainties of the CO2, CH4 and N2O emissions were computed per sector and country.
The devised methodology propagates uncertainty stemming from statistics of human activity and emission factors, following the guidelines of the Intergovernmental Panel on Climate Change (IPCC 2006). The analysis takes into consideration the accuracy of emission estimates for developed versus developing countries and the correlation arising from sector aggregation, and it includes an ad-hoc treatment for specific sources and country-specific emission factors. The resulting emissions and their uncertainties are available for all world countries and all IPCC/EDGAR sectors, and for each country the share of the total uncertainty that each sector is responsible for is identified.
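The error propagation method referred to above corresponds to IPCC (2006) Approach 1. A minimal sketch (our illustration, with made-up numbers) combines activity-data and emission-factor uncertainties in quadrature and aggregates uncorrelated sectors:

```python
import math

def combined_uncertainty(u_activity, u_ef):
    # For a multiplicative estimate E = activity x emission factor, the relative
    # uncertainties combine in quadrature (IPCC 2006, Approach 1)
    return math.sqrt(u_activity ** 2 + u_ef ** 2)

def aggregate_uncertainty(emissions, uncertainties):
    # Aggregate uncorrelated sectoral relative uncertainties into the relative
    # uncertainty of the emission total (absolute uncertainties add in quadrature)
    total = sum(emissions)
    return math.sqrt(sum((e * u) ** 2 for e, u in zip(emissions, uncertainties))) / total
```

Correlated sectors (e.g. sharing one emission factor) would instead have their absolute uncertainties summed linearly before quadrature, which is the correlation treatment the abstract alludes to.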
Our results show that world-wide CO2, CH4 and N2O emissions lie within confidence ranges of 5%, 33% and in excess of 100%, respectively. The sectors most responsible for this uncertainty depend strongly on the statistical infrastructure of the country, but we observe in general that a few sectors with smaller emission totals contribute a large proportion of the total uncertainty.
This global uncertainty assessment aims at contributing to the European initiative of the CO2 Monitoring Task Force, building up an operational greenhouse gas monitoring and verification support capacity.
How to cite: Solazzo, E., Bergamaschi, P., Choulga, M., Oreggioni, G., Muntean, M., Crippa, M., and Janssens-Maenhout, G.: Uncertainty analysis of greenhouse gases emission: application to the EDGAR inventory, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5633, https://doi.org/10.5194/egusphere-egu2020-5633, 2020.
EGU2020-10560 | Displays | ITS5.2/AS3.17
Anthropogenic CO2 emission uncertaintiesMargarita Choulga, Greet Janssens-Maenhout, Gianpaolo Balsamo, Joe McNorton, Efisio Solazzo, Nicolas Bousserez, and Anna Agusti-Panareda
The CO2 Human Emissions (CHE) project has been tasked by the European Commission to prepare the development of a European capacity to monitor anthropogenic CO2 emissions. The monitoring of fossil fuel CO2 emissions has to come with a sufficiently low uncertainty in order to be useful for policymakers. In this context, the main approaches to estimate fossil fuel emissions, apart from bottom-up inventories, are based on inverse transport modelling, either on its own or within a coupled carbon cycle fossil fuel data assimilation system. Both approaches make use of atmospheric CO2 and other tracers (e.g., CO and NOx) and rely on the availability of prior fossil fuel CO2 emission estimates and uncertainties (as well as biogenic fluxes for the inverse transport modelling). For a robust estimate of the uncertainty, information from different sources needs to be brought together.
A methodology to calculate yearly and monthly anthropogenic CO2 emission uncertainties based on IPCC guidelines (the 2006 IPCC Guidelines for National Greenhouse Gas Inventories and its 2019 Refinement) has been developed. Emission uncertainties are calculated for all world countries under the assumption of two country categories, depending on whether a country's statistical infrastructure is well or less developed: countries with well-developed statistical infrastructure have lower emission uncertainties, while countries with less developed statistical infrastructure have higher ones. A sensitivity analysis investigates how the infrastructure category assumed for several countries affects the global emission uncertainty. Sensitivity experiments with different anthropogenic CO2 source distributions, as well as first results of using these prior anthropogenic CO2 uncertainties in ensemble perturbation runs, will be presented.
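The two-category assumption and its sensitivity can be sketched as follows. This is an illustration under assumed per-category uncertainty values and made-up country emissions, not the values used in the study:

```python
import math

# Hypothetical relative uncertainties per statistical-infrastructure category
U_CAT = {"well": 0.05, "less": 0.15}

def global_uncertainty(countries):
    # countries: list of (annual emission, category); uncorrelated aggregation
    total = sum(e for e, _ in countries)
    return math.sqrt(sum((e * U_CAT[c]) ** 2 for e, c in countries)) / total

# Flipping one large emitter from "less" to "well" developed shrinks the
# global relative uncertainty noticeably
base = [(1000.0, "well"), (800.0, "less"), (200.0, "less")]
flipped = [(1000.0, "well"), (800.0, "well"), (200.0, "less")]
```

Because large emitters dominate the quadrature sum, the global uncertainty is most sensitive to the category assumed for the largest countries, which is the point of the sensitivity analysis.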
How to cite: Choulga, M., Janssens-Maenhout, G., Balsamo, G., McNorton, J., Solazzo, E., Bousserez, N., and Agusti-Panareda, A.: Anthropogenic CO2 emission uncertainties, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10560, https://doi.org/10.5194/egusphere-egu2020-10560, 2020.
EGU2020-12506 | Displays | ITS5.2/AS3.17
Economic sectoral transfer could not help to global CO2 mitigationHansunbai Li and Yu Ye
CO2 is the largest component of the anthropogenic greenhouse gases (GHGs) that have caused remarkable changes in the climate and Earth system. In response, global mitigation efforts, especially sectoral and cross-sectoral ones, have been undertaken while meeting the needs of global development. Understanding the sectoral structures and emissions of different countries and regions during the period of rapid emission growth and worldwide industrial transfer after 1970 can help to avoid misleading mitigation pathways and can support decision-makers in selecting efficient strategies for different countries and sectors.
Using CO2 emission data from the GHG emission inventory EDGAR (The Emissions Database for Global Atmospheric Research), we identified the major emission pattern of different regions by determining the largest sectoral emission in each grid cell, which indicates the spatial distribution of sectoral emissions. We also identified the high-emission regions of the world by selecting grid cells whose emission exceeds the global mean plus two standard deviations after a logarithm transform; these regions have contributed more than 80% of global emissions in every year since 1970. We then determined the largest sectoral emission in each grid cell of the high-emission regions to identify the main contributing sectors. We analyzed the changes of these two types of sectoral emissions in space and time, representing the spatial distribution pattern and the largest emission sources at different times.
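The grid-cell selection described above can be sketched with synthetic data. This is our reconstruction of the procedure on made-up fields, not the authors' code or the EDGAR grid:

```python
import numpy as np

# Synthetic stand-in for gridded sectoral CO2 emissions: emis[sector, y, x]
rng = np.random.default_rng(0)
emis = rng.lognormal(mean=0.0, sigma=1.0, size=(4, 50, 50))
emis[0, 0, 0] *= 1e6                     # plant an obvious hotspot for illustration

total = emis.sum(axis=0)                 # total emission per grid cell
log_total = np.log(total)                # logarithm transform
threshold = np.exp(log_total.mean() + 2.0 * log_total.std())
high_mask = total > threshold            # "high emission region" cells
dominant_sector = emis.argmax(axis=0)    # largest sectoral emission per cell
```

Tracking `dominant_sector` inside `high_mask` year by year gives exactly the two diagnostics the study analyses: the spatial pattern of sectoral dominance and the main contributors in high-emission regions.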
Our study shows that transport-sector emissions dominate spatially after the completion of transport infrastructure construction, as emissions shift from manufacturing to the transport sector. Countries fall into three types according to when this transition was completed: in countries like the USA, the transport sector has dominated the spatial distribution since the 1970s; in countries like the UK and France, the dominant sectoral emission in space was the building sector before 1990, after which it was replaced by the transport sector; and other countries have not yet finished the transition. Our study also reveals that high-emission regions occur in megacities and where power industries are located, and that their area has increased. However, sectoral emissions differ in both time and space. For the USA and Europe, the main emission sectors in high-emission regions shifted from the power industry and manufacturing sector to the building sector before 1990; in particular, the dominant sector in megacities shifted from manufacturing to buildings as the area of high-emission regions increased. For eastern China, the main emission sectors in high-emission regions were the power industry and the manufacturing sector, which grew quickly between 1980 and 1990 as its cities became the world's manufacturing center. In conclusion, during the sharp emission increase since 1970, the role of industrial transfer was to move emissions from some sectors to other regions in other countries, while emissions from other sectors replaced the transferred ones.
How to cite: Li, H. and Ye, Y.: Economic sectoral transfer could not help to global CO2 mitigation, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12506, https://doi.org/10.5194/egusphere-egu2020-12506, 2020.
ITS5.4/CL3.4 – Economics and Econometrics of Climate Change: evaluating the drivers, impacts, and policies of climate change
EGU2020-18986 | Displays | ITS5.4/CL3.4
A statistical model of the global carbon budgetMikkel Bennedsen, Eric Hillebrand, and Siem Jan Koopman
We propose a statistical model of the global carbon budget as represented in the annual data set made available by the Global Carbon Project (Friedlingstein et al., 2019, Earth System Science Data 11, 1783-1838). The model connects four main objects of interest: atmospheric CO2 concentrations, anthropogenic CO2 emissions, the absorption of CO2 by the terrestrial biosphere (land sink), and the absorption of CO2 by the ocean (ocean sink). The model captures the global carbon budget equation, which states that emissions not absorbed by either the land or ocean sink must remain in the atmosphere and constitute a flow to the stock of atmospheric concentrations. Emissions depend on global economic activity as measured by world gross domestic product (GDP), and sink activity depends on the level of atmospheric concentrations (fertilization). The model is cast as a state-space system, which facilitates estimation of its parameters using the Kalman filter and the method of maximum likelihood. We illustrate the usefulness of the model in two applications: (i) short-horizon forecasts of all variables in the model, which are an output of the Kalman filter; and (ii) long-horizon projections of climate variables, implied by certain assumptions on future world GDP, which are constructed from the model and compared with those coming from the Representative Concentration Pathway scenarios. The statistical nature of the model allows for an assessment of parameter estimation uncertainty in the forecast and projection exercises.
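The budget identity underlying the model can be sketched in discrete time. This is our own illustration with hypothetical sink coefficients and ppm-like units; the actual model estimates these relationships statistically within a state-space system rather than fixing them:

```python
def simulate_budget(c0, emissions, beta_land=0.005, beta_ocean=0.005, c_pre=280.0):
    # Budget identity: concentration changes by emissions minus land and ocean
    # sinks; each sink responds linearly to the concentration excess over a
    # pre-industrial level ("fertilization"). Coefficients are made-up values.
    conc = [c0]
    for e in emissions:
        c = conc[-1]
        s_land = beta_land * (c - c_pre)
        s_ocean = beta_ocean * (c - c_pre)
        conc.append(c + e - s_land - s_ocean)
    return conc
```

In the statistical model, the analogues of `beta_land` and `beta_ocean` become state-space parameters estimated by maximum likelihood, and the recursion becomes the transition equation filtered by the Kalman filter.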
How to cite: Bennedsen, M., Hillebrand, E., and Koopman, S. J.: A statistical model of the global carbon budget, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18986, https://doi.org/10.5194/egusphere-egu2020-18986, 2020.
EGU2020-2959 * | Displays | ITS5.4/CL3.4 | Highlight
Tipping Points in the Climate System and the Economics of Climate ChangeJames Rising, Simon Dietz, Thomas Stoerk, and Gernot Wagner
How to cite: Rising, J., Dietz, S., Stoerk, T., and Wagner, G.: Tipping Points in the Climate System and the Economics of Climate Change, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2959, https://doi.org/10.5194/egusphere-egu2020-2959, 2020.
EGU2020-21065 | Displays | ITS5.4/CL3.4
Chaos in Estimates of Climate Change ImpactsEmanuele Massetti and Emanuele Di Lorenzo
Estimates of the physical, social and economic impacts of climate change are less accurate than usually thought because the impacts literature has largely neglected the internal variability of the climate system. Climate change scenarios are highly sensitive to the initial conditions of the climate system due to the chaotic dynamics of weather. As the initial conditions of the climate system are not known with a sufficiently high level of precision, each future climate scenario – for any given model parameterization and level of exogenous forcing – is only one of the many possible future realizations of climate. The impacts literature usually relies on only one realization, randomly taken out of the full distribution of future climates. Here we use one of the few available large-scale ensembles produced to study internal variability, together with an econometric model of climate change impacts on United States (US) agricultural productivity, to show that the range of impacts is much larger than previously thought. Different ensemble members lead to significantly different impacts, and significant sign reversals are frequent. Relying on only one ensemble member leads to incorrect conclusions on the effect of climate change on agriculture in most US counties. Impacts studies should start using large-scale ensembles of future climate change to predict damages, and climatologists should ramp up efforts to run large ensembles for all GCMs, at least for the most frequently used scenarios of exogenous forcing.
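The single-realization problem can be illustrated with a synthetic ensemble. Everything below is made up for illustration; the damage function and numbers are not the paper's econometric model:

```python
import numpy as np

rng = np.random.default_rng(42)
n_members, n_counties = 40, 100

# Synthetic county-level warming from each ensemble member: identical forcing,
# different initial conditions, hence different realizations of internal variability
dT = rng.normal(loc=1.5, scale=1.0, size=(n_members, n_counties))

def damage(dt):
    # Illustrative quadratic damage function (e.g. percent productivity change)
    return -0.05 * dt - 0.02 * dt ** 2

impacts = damage(dT)
single_member_estimate = impacts[0].mean()   # what a one-realization study reports
ensemble_estimates = impacts.mean(axis=1)    # the full distribution across members
spread = ensemble_estimates.max() - ensemble_estimates.min()
```

A study reporting only `single_member_estimate` hides the entire `spread`, which is the range of equally plausible outcomes the abstract argues should be reported.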
How to cite: Massetti, E. and Di Lorenzo, E.: Chaos in Estimates of Climate Change Impacts, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21065, https://doi.org/10.5194/egusphere-egu2020-21065, 2020.
EGU2020-8332 | Displays | ITS5.4/CL3.4
Aerosol - Climate Interactions, the Distribution of Aerosol Impacts, and Implications for the Social Cost of CarbonJennifer Burney, Geeta Persad, Jonathan Proctor, Marshall Burke, Eran Bendavid, Sam Heft-Neal, and Ken Caldeira
Here we demonstrate how the same aerosol emissions, released from different locations, lead to different regional and global changes in the physical environment, in turn resulting in divergent magnitudes and spatial distributions of societal impacts. Atmospheric chemistry and the general circulation do not evenly distribute aerosols around the globe, so aerosol impacts -- both direct and via interactions with the general circulation -- vary spatially. Our repeat-cycle perturbation experiment shows that the same emissions, when released from one of 8 different regions, result in significantly different steady-state distributions of surface particulate matter (PM2.5), total column aerosol optical depth (AOD), surface temperature, and precipitation. We link these changes in the physical environment to established temperature, precipitation, AOD, and PM2.5 damage functions to estimate both local and global impacts on infant mortality, crop yields, and economic growth. Because the damages associated with these aerosol and aerosol precursor emissions are strongly emission-location dependent, the marginal dollar spent on mitigation would have very different returns in different locations, both locally and globally. This has important implications for calculating a realistic social cost of carbon, since these aerosol-mediated effects are ultimately inseparable from the processes producing CO2 emissions.
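The location dependence of marginal damages can be sketched schematically. All numbers below are hypothetical placeholders, not results or response coefficients from the study:

```python
# Same emission pulse, different source regions: atmospheric transport gives each
# region its own PM2.5 response, and damages also depend on the exposed population
PM25_RESPONSE = {"region_A": 1.8, "region_B": 0.6}    # ug/m3 per unit emission
POPULATION_EXPOSED = {"region_A": 5.0e7, "region_B": 2.0e8}

def mortality_damage(source_region, risk_per_ug_person=1e-7):
    # Attributable burden of the pulse: response x exposed population x unit risk
    return PM25_RESPONSE[source_region] * POPULATION_EXPOSED[source_region] * risk_per_ug_person
```

Even in this toy setup, the region with the weaker atmospheric response can carry the larger damage once exposure is accounted for, which is why the marginal return on mitigation is emission-location dependent.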
How to cite: Burney, J., Persad, G., Proctor, J., Burke, M., Bendavid, E., Heft-Neal, S., and Caldeira, K.: Aerosol - Climate Interactions, the Distribution of Aerosol Impacts, and Implications for the Social Cost of Carbon, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8332, https://doi.org/10.5194/egusphere-egu2020-8332, 2020.
EGU2020-18055 | Displays | ITS5.4/CL3.4
Embracing dynamic complexity in climate economics: The DSK Agent-based Integrated Assessment Modelling
Claudia Wieners, Francesco Lamperti, Andrea Roventini, and Roberto Buizza
Integrated Assessment Models are a key tool for searching for and evaluating climate policies - i.e. a set of measures best suited to avoid the worst of climate change without “harming the economy” too much.
Climate action µ(t) is typically portrayed as coming at a cost C(µ) (relative to a no-policy case), where C is a positive, monotonically increasing function.
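For concreteness, a common convex specification of this cost (used, for example, in DICE-type models; the exact parameterisation here is illustrative) is

```latex
C(\mu) = \theta_1 \,\mu^{\theta_2}, \qquad \theta_1 > 0,\; \theta_2 > 1,
```

so that marginal abatement costs rise as the abatement rate µ ∈ [0,1] increases, and the cost of today's abatement is independent of any abatement undertaken earlier.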
However, this representation ignores economic dynamics. For instance, it assumes that CO2 abatement costs today are independent of efforts made in previous years, whereas in reality previous investments in infrastructure or knowledge affect abatement and abatement costs in the future. More generally, the economy is a complex system of interacting players, capable of path-dependent behaviour, multiple equilibria, out-of-equilibrium dynamics, and transitions between states, and climate policy measures (or climate impacts) targeting some actors can affect the whole system.
Agent-based modelling has in recent years emerged as a tool to break the constraints imposed by the general equilibrium models underlying most IAMs. Agent-based models directly simulate the activities of diverse interacting agents, rather than making assumptions about the aggregate behaviour of groups of agents.
Here, we present an agent-based Integrated Assessment Model, the Dystopian Schumpeter-Keynes (DSK) model. It contains an industrial sector with interacting machine and consumption-good firms, a banking sector, a government, and an electricity supplier, coupled to a climate module. The model has been used, among other things, to investigate how different types of climate impacts propagate through the economy. In this presentation, we focus on climate policy. In particular, we investigate:
1. Which policy tools, or combinations of tools, are effective at bringing about a sufficiently rapid decarbonisation? Is a uniform carbon tax really sufficient to cause a green transition?
2. What will the side effects on the economy be? Will there be ongoing strain on the economy, or will costs be transitional - potentially even with long-term benefits?
How to cite: Wieners, C., Lamperti, F., Roventini, A., and Buizza, R.: Embracing dynamic complexity in climate economics: The DSK Agent-based Integrated Assessment Modelling, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18055, https://doi.org/10.5194/egusphere-egu2020-18055, 2020.
EGU2020-3449 | Displays | ITS5.4/CL3.4
Econometric methods for empirical climate modelling
David Hendry and Jennifer Castle
To understand the evolution of climate time series, it is essential to take account of their non-stationary nature, with both stochastic trends and distributional shifts. Using the novel approach of saturation estimation, explained in the presentation, we model observational records of evolving climate processes that also shift, undertaking empirical studies that are complementary to analyses based on laws of conservation of energy and physical process-based models. Although saturation estimation creates more candidate variables than observations in the initial general formulation, our machine-learning model selection algorithm has seen many successful applications, illustrated here by modelling the highly non-stationary annual data on UK CO2 emissions over 1860-2018, with strong upward then downward trends punctuated by large outliers from world wars, national coal strikes and stringent legislation.
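The split-sample idea behind impulse-indicator saturation (IIS) can be sketched as follows; this is a deliberate simplification of the Autometrics-style algorithms actually used, with illustrative settings:

```python
import numpy as np

def _dummy_tstats(y, Z, n_dummies):
    # OLS over the full sample; return t-statistics of the last n_dummies columns
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    dof = len(y) - Z.shape[1]
    sigma2 = (resid @ resid) / dof
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(Z.T @ Z)))
    return (beta / se)[-n_dummies:]

def iis(y, X, crit=2.33):
    """Split-sample impulse-indicator saturation: add an impulse dummy for
    every observation in each half of the sample in turn, and retain the
    dummies whose |t|-statistic exceeds `crit`."""
    n = len(y)
    retained = []
    for block in (np.arange(0, n // 2), np.arange(n // 2, n)):
        D = np.zeros((n, len(block)))
        D[block, np.arange(len(block))] = 1.0
        tstats = _dummy_tstats(y, np.hstack([X, D]), len(block))
        retained.extend(int(i) for i in block[np.abs(tstats) > crit])
    return sorted(retained)
```

Retained dummies flag outliers and shifts, such as the war and strike years in the UK CO2 record.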
How to cite: Hendry, D. and Castle, J.: Econometric methods for empirical climate modelling, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3449, https://doi.org/10.5194/egusphere-egu2020-3449, 2020.
EGU2020-20049 | Displays | ITS5.4/CL3.4
Climate shocks and the supply and demand for climate governance
Sam Rowan
Existing studies have demonstrated substantial and robust effects of temperature shocks on economic growth, agricultural output, labor productivity, conflict, and health. These studies help clarify the impacts of climate change on social and economic systems, yet the relationship between climate shocks and political outcomes is less well identified. What effect do climate shocks have on states' climate policies? In this paper, I estimate the relationship between national-level temperature and rainfall shocks and the supply and demand for international climate governance. Temperature shocks may increase the salience of climate change in national politics and lead political leaders to adjust policies to match. Similarly, temperature shocks may have material consequences that induce adaptation---one avenue being to use international institutions to coordinate a global response to climate impacts. I argue that the responsiveness of national governments to climate shocks is conditioned by the political and natural context in which governments operate. Specifically, I expect that democratic governments will be more responsive to climate shocks, as will countries that are more vulnerable to the impacts of climate change. I assess whether countries that experience more frequent and more severe climate shocks participate more in international climate politics and adjust their climate policies. I examine four sets of outcomes at the national level: (1) membership in international institutions that govern climate change, (2) the provision and receipt of climate finance, (3) representation at the UN climate conferences, and (4) national climate policies. As the climate changes, we are developing stronger evidence about the underlying natural relationships, but the heterogeneous effects across socio-political contexts are less well understood.
This paper contributes to our understanding of how climate change shapes national policy and, with it, the ability of countries to manage and adapt to climate change.
How to cite: Rowan, S.: Climate shocks and the supply and demand for climate governance, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20049, https://doi.org/10.5194/egusphere-egu2020-20049, 2020.
EGU2020-9464 | Displays | ITS5.4/CL3.4
Implications of intrinsic variability for economic assessments of climate change
David Stainforth, Raphael Calel, Sandra Chapman, and Nicholas Watkins
Integrated Assessment Models (IAMs) are widely used to evaluate the economic costs of climate change, the social cost of carbon, and the value of mitigation policies. These IAMs include simple energy balance models (EBMs) to represent the physical climate system and to calculate the time series of global mean temperature (GMT) in response to changing radiative forcing [1]. The EBMs are deterministic in nature, which leads to smoothly varying GMT trajectories, so for simple monotonically increasing forcing scenarios (e.g. representative concentration pathways (RCPs) 8.5, 6.0 and 4.5) the GMT trajectories are also monotonically increasing. By contrast, real-world and global-climate-model-derived time series show substantial inter-annual and inter-decadal variability. Here we present an analysis of the implications of this intrinsic variability for the economic consequences of climate change.
We use a simple stochastic EBM to generate large ensembles of GMT trajectories under each of the RCP forcing scenarios. The damages implied by each trajectory are calculated using the Weitzman damage function. This provides a conditional estimate of the unavoidable uncertainty in implied damages, which turns out to be large and positively skewed due to the shape of the damage function. Under RCP2.6 we calculate a 5-95% range of -30% to +52% of the deterministic value; under RCP8.5 it is -13% to +16%. The risk premia associated with such unavoidable uncertainty are also significant: under our economic assumptions, a social planner would be willing to pay 32 trillion dollars to avoid just the intrinsic uncertainty in RCP8.5. This figure rises further when allowance is made for epistemic uncertainty in climate sensitivity. We conclude that an appropriate representation of stochastic variability in the climate system is important to include in future economic assessments of climate change.
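The stochastic-EBM approach can be illustrated with an Ornstein-Uhlenbeck-type extension of the standard energy balance equation; the parameter values below are illustrative, not those of the study:

```python
import numpy as np

def stochastic_ebm(forcing, lam=1.3, C=8.0, sigma=0.6, dt=1.0, n_ens=1000, seed=1):
    """Euler-Maruyama integration of C dT/dt = F(t) - lam*T + sigma*xi(t).

    forcing: array of radiative forcing (W m^-2), one value per time step.
    lam: climate feedback parameter (W m^-2 K^-1); C: heat capacity
    (W yr m^-2 K^-1); sigma: noise amplitude. All values illustrative.
    Returns an (n_ens, len(forcing)) array of GMT anomaly trajectories."""
    rng = np.random.default_rng(seed)
    n = len(forcing)
    T = np.zeros((n_ens, n))
    for i in range(1, n):
        noise = sigma * np.sqrt(dt) * rng.standard_normal(n_ens)
        T[:, i] = T[:, i - 1] + (dt * (forcing[i - 1] - lam * T[:, i - 1]) + noise) / C
    return T
```

Each ensemble member fluctuates around the deterministic trajectory, so a convex damage function evaluated member-by-member yields a skewed damage distribution even though the forcing is identical.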
[1] Calel, R. and Stainforth D.A., “On the Physics of Three Integrated Assessment Models”, Bulletin of the American Meteorological Society, 2017.
How to cite: Stainforth, D., Calel, R., Chapman, S., and Watkins, N.: Implications of intrinsic variability for economic assessments of climate change, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9464, https://doi.org/10.5194/egusphere-egu2020-9464, 2020.
EGU2020-6454 | Displays | ITS5.4/CL3.4
Discounting Future Climate Change and the Equilibrium Real Interest Rate
Michael Bauer and Glenn Rudebusch
The social discount rate is a crucial element required for valuing future damages from climate change. A consensus has emerged that discount rates should decline with horizon, i.e., that the term structure of discount rates should have a negative slope. However, much controversy remains about the appropriate overall level of discount rates.
We contribute to this debate from a macro-finance perspective, based on the insight that the equilibrium real interest rate, commonly known as r*, is the crucial determinant of the level of discount rates. First, we show theoretically how r* anchors the term structure of discount rates, using the modern macro-finance theory of the term structure of interest rates to provide a new perspective on classic results about social discount rates. Second, we show empirically that new macro-finance estimates of r* have fallen substantially over the past quarter century---consistent with a broader literature that documents such a secular decline. Bayesian estimation of a state-space model for Treasury yields, inflation and the real interest rate allows us to quantify both the decline in r* and the resulting downward shift of the term structure of social discount rates. Third, we document that this decline in r* and the social discount rate boosts the social cost of carbon and has quantitatively important implications for assessing the economic consequences of climate change. In essence, we demonstrate that the lower new normal for interest rates implies a higher new normal for the present value of climate change damages.
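The core mechanics can be seen in the textbook certainty-equivalent discount rate, which shows both why the term structure declines with horizon and why the level anchor matters; this sketch is not the authors' state-space estimation:

```python
import numpy as np

def cert_equiv_rate(rates, probs, horizon):
    """Certainty-equivalent discount rate at a given horizon when the
    (constant) discount rate is uncertain: average the discount *factors*,
    not the rates. The resulting term structure declines with horizon."""
    rates, probs = np.asarray(rates), np.asarray(probs)
    df = np.sum(probs * np.exp(-rates * horizon))  # expected discount factor
    return -np.log(df) / horizon
```

Averaging factors rather than rates produces the negative slope: the low-rate scenario dominates the far-future discount factor. A lower anchor r* shifts the whole curve down and so raises the present value of distant damages; for instance, a $100 damage in 100 years is worth about $1.8 at a constant 4% rate but about $13.5 at 2%.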
How to cite: Bauer, M. and Rudebusch, G.: Discounting Future Climate Change and the Equilibrium Real Interest Rate, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6454, https://doi.org/10.5194/egusphere-egu2020-6454, 2020.
EGU2020-10570 | Displays | ITS5.4/CL3.4
Co-movements of financial volatilities in a changing environment
Susana Martins
Anthropogenic climate change has been attributed mainly to the excessive burning of fossil fuels and the release of carbon compounds. On average, 75% of primary energy is still produced from fossil fuels. In order to mitigate the global effects of climate change, a transition towards low-carbon economies is thus necessary. However, given current technology, this transition requires investments to shift away from high-carbon assets, and so the effectiveness of changes in investment decisions depends strongly on expectations about policy change (e.g. regarding carbon pricing). The systemic implications of disruptive technological progress for the prices of carbon-intensive assets are thus compounded by the geopolitical nature of transition risk. If investors are pricing transition risk, prices of high-carbon assets should all be responsive to climate-related policy news. To model the dynamics of volatility co-movements at the global scale, we propose an extension of the global volatility factor model of Engle and Martins (in preparation). To allow for richer structures of the global volatility process, including dynamics, structural changes, outliers or time-varying parameters, we adapt the indicator saturation approach introduced by Hendry (1999) to the second moment and to high-frequency data. In the model, climate change is interpreted as a source of structural change affecting the financial system. The new global volatility model is applied to the daily share prices of major oil and gas companies from different countries traded on the NYSE to avoid asynchronicity. As a proxy for climate change risk, we use the climate change news index of Engle et al. (2019). This index is a time series that captures news about long-run climate risk. In particular, we use the innovations in their negative (or bad) news index, which is based on sentiment analysis.
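As a deliberately crude sketch of what a common volatility factor captures (the Engle-Martins factor model and the indicator-saturation machinery are far richer than this):

```python
import numpy as np

def global_vol_factor(returns):
    """returns: (n_days, n_assets) array of daily returns.

    Proxy a common volatility factor by the cross-sectional average of
    squared demeaned returns on each day; co-movement in volatility shows
    up as persistent swings in this series."""
    r = returns - returns.mean(axis=0, keepdims=True)
    return (r ** 2).mean(axis=1)
```

A structural break in this factor, e.g. around major climate-policy news, is the kind of feature the indicator-saturation extension is designed to detect.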
How to cite: Martins, S.: Co-movements of financial volatilities in a changing environment, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10570, https://doi.org/10.5194/egusphere-egu2020-10570, 2020.
EGU2020-13230 * | Displays | ITS5.4/CL3.4 | Highlight
Climate-economy feedbacks, temperature variability, and the social cost of carbon
Jarmo Kikstra, Paul Waidelich, James Rising, Dmitry Yumashev, Chris Hope, and Chris Brierley
A key statistic describing climate change impacts is the “social cost of carbon” (SCC), the total market and non-market costs to society incurred by releasing a ton of CO2. Estimates of the SCC have risen in recent years, with improved understanding of the risk of climate change to various sectors, including agriculture [1], mortality [2], and economic growth [3].
The total risks of climate impacts also depend on the representation of human-climate feedbacks, such as the effect of climate impacts on GDP growth, and of extremes (rather than a focus only on means), but this relationship has not been extensively studied [4-7]. In this paper, we update the widely used PAGE IAM to investigate how SCC distributions change with the inclusion of climate-economy feedbacks and temperature variability. The PAGE model has recently been improved with representations of permafrost thawing and surface albedo feedback, CMIP6 scenarios, and empirical market damage estimates [8]. We study how the changes from PAGE09 to PAGE-ICE affected the SCC, increasing it by up to 75%, with an SCC distribution with a mean around $300 for the central SSP2-4.5 scenario. We then model the effects of different levels of persistence of damages, for which the persistence parameter is shown to have enormous effects. Adding stochastic interannual regional temperature variations, based on an analysis of observational temperature data [9], can increase the hazard rate of economic catastrophes and changes the form of the distribution of SCC values. Both the effects of temperature variability and climate-economy feedbacks are region-dependent. Our results highlight the importance of feedbacks and extremes for understanding the expected value, distribution, and heterogeneity of climate impacts.
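The role of the damage-persistence parameter can be seen in a toy present-value calculation (an illustration only, not PAGE-ICE; `rho` interpolates between purely transient, level-type damages and permanent, growth-type damages):

```python
import numpy as np

def discounted_damages(md, rho, r=0.03, horizon=200):
    """Present value of a marginal damage pulse `md` (arbitrary $ units)
    that persists with annual decay factor rho in [0, 1]:
    rho = 0 -> one-off level damage; rho = 1 -> permanent, growth-type damage."""
    t = np.arange(horizon)
    return float(np.sum(md * rho ** t * np.exp(-r * t)))
```

The same $1 of marginal damage is worth $1 if transient but roughly $34 if fully persistent at a 3% discount rate, which is why the persistence assumption dominates SCC estimates.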
[1] Moore, F. C., Baldos, U., Hertel, T., & Diaz, D. (2017). New science of climate change impacts on agriculture implies higher social cost of carbon. Nature communications, 8(1), 1607.
[2] Carleton et al. (2018). Valuing the global mortality consequences of climate change accounting for adaptation costs and benefits.
[3] Ricke, K., Drouet, L., Caldeira, K., & Tavoni, M. (2018). Country-level social cost of carbon. Nature Climate Change, 8(10), 895.
[4] Burke, M., et al. (2016). Opportunities for advances in climate change economics. Science, 352(6283), 292–293. https://doi.org/10.1126/science.aad9634
[5] National Academies of Sciences Engineering and Medicine. (2017). Valuing climate damages: updating estimation of the social cost of carbon dioxide. National Academies Press.
[6] Stiglitz, J. E., et al. (2017). Report of the high-level commission on carbon prices.
[7] Field, C. B., Barros, V., Stocker, T. F., & Dahe, Q. (2012). Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation: Special Report of the Intergovernmental Panel on Climate Change (Vol. 9781107025). https://doi.org/10.1017/CBO9781139177245.009
[8] Yumashev, D., et al. (2019). Climate policy implications of nonlinear decline of Arctic land permafrost and other cryosphere elements. Nature Communications, 10(1). https://doi.org/10.1038/s41467-019-09863-x
[9] Brierley, C. M., Koch, A., Ilyas, M., Wennyk, N., & Kikstra, J. S. (2019, March 12). Half the world's population already experiences years 1.5°C warmer than preindustrial. https://doi.org/10.31223/osf.io/sbc3f
How to cite: Kikstra, J., Waidelich, P., Rising, J., Yumashev, D., Hope, C., and Brierley, C.: Climate-economy feedbacks, temperature variability, and the social cost of carbon, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13230, https://doi.org/10.5194/egusphere-egu2020-13230, 2020.
A key statistic describing climate change impacts is the “social cost of carbon” (SCC), the total market and non-market costs to society incurred by releasing a ton of CO2. Estimates of the SCC have risen in recent years, with improved understanding of the risk of climate change to various sectors, including agriculture [1], mortality [2], and economic growth [3].
The total risks of climate impacts also depend on the representation of human-climate feedbacks such as the effect of climate impacts on GDP growth and extremes (rather than a focus only on means), but this relationship has not been extensively studied [4-7]. In this paper, we update the widely used PAGE IAM to investigate how SCC distributions change with the inclusion of climate-economy feedbacks and temperature variability. The PAGE model has recently been improved with representations of permafrost thawing and surface albedo feedback, CMIP6 scenarios, and empirical market damage estimates [8]. We study how changes from PAGE09 to PAGE-ICE affected the SCC, increasing it up to 75%, with a SCC distribution with a mean around $300 for the central SSP2-4.5 scenario. Then we model the effects of different levels of the persistence of damages, for which the persistence parameter is shown to have enormous effects. Adding stochastic interannual regional temperature variations based on an analysis of observational temperature data [9] can increase the hazard rate of economic catastrophes changes the form of the distribution of SCC values. Both the effects of temperature variability and climate-economy feedbacks are region-dependent. Our results highlight the importance of feedbacks and extremes for the understanding of the expected value, distribution, and heterogeneity of climate impacts.
[1] Moore, F. C., Baldos, U., Hertel, T., & Diaz, D. (2017). New science of climate change impacts on agriculture implies higher social cost of carbon. Nature communications, 8(1), 1607.
[2] Carleton, et al. (2018). Valuing the global mortality consequences of climate change accounting for adaptation costs and benefits.
[3] Ricke, K., Drouet, L., Caldeira, K., & Tavoni, M. (2018). Country-level social cost of carbon. Nature Climate Change, 8(10), 895.
[4] Burke, M., et al. (2016). Opportunities for advances in climate change economics. Science, 352(6283), 292–293. https://doi.org/10.1126/science.aad9634
[5] National Academies of Sciences Engineering and Medicine. (2017). Valuing climate damages: updating estimation of the social cost of carbon dioxide. National Academies Press.
[6] Stiglitz, J. E., et al.. (2017). Report of the high-level commission on carbon prices.
[7] Field, C. B., Barros, V., Stocker, T. F., & Dahe, Q. (2012). Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation: Special Report of the Intergovernmental Panel on Climate Change (Vol. 9781107025). https://doi.org/10.1017/CBO9781139177245.009
[8] Yumashev, D., et al. (2019). Climate policy implications of nonlinear decline of Arctic land permafrost and other cryosphere elements. Nature Communications, 10(1). https://doi.org/10.1038/s41467-019-09863-x
[9] Brierley, C. M., Koch, A., Ilyas, M., Wennyk, N., & Kikstra, J. S. (2019, March 12). Half the world's population already experiences years 1.5°C warmer than preindustrial. https://doi.org/10.31223/osf.io/sbc3f
How to cite: Kikstra, J., Waidelich, P., Rising, J., Yumashev, D., Hope, C., and Brierley, C.: Climate-economy feedbacks, temperature variability, and the social cost of carbon, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13230, https://doi.org/10.5194/egusphere-egu2020-13230, 2020.
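The interaction between damage persistence and stochastic temperature variability described in the abstract can be illustrated with a toy Monte Carlo sketch. All parameter values below (discount rate, damage function, warming path, sensitivity distribution) are illustrative placeholders, not the PAGE-ICE calibration:

```python
import numpy as np

rng = np.random.default_rng(0)

def scc_samples(n=5_000, persistence=0.5, temp_sd=0.15):
    """Toy Monte Carlo SCC sketch: discounted marginal damages under an
    uncertain climate-sensitivity draw, stochastic interannual temperature
    variability, and partially persistent damages. All numbers are
    illustrative placeholders, not PAGE-ICE values."""
    years = np.arange(2020, 2101)
    discount = 1.03 ** -(years - 2020)              # 3% pure discounting
    sccs = np.empty(n)
    for i in range(n):
        ecs = rng.lognormal(np.log(3.0), 0.3)       # climate sensitivity draw
        trend = ecs * np.linspace(0.3, 1.2, years.size)      # warming path
        temp = trend + rng.normal(0.0, temp_sd, years.size)  # interannual noise
        marginal = 1e-3 * temp ** 2                 # convex damage response
        level = 0.0
        total = 0.0
        for d, w in zip(marginal, discount):
            # a share of last year's damage level persists into this year
            level = persistence * level + d
            total += level * w
        sccs[i] = total
    return sccs

low_persistence = scc_samples(persistence=0.2)
high_persistence = scc_samples(persistence=0.8)
```

Comparing the two sample sets shows the qualitative effect reported above: higher damage persistence shifts and spreads the SCC distribution.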
EGU2020-20039 * | Displays | ITS5.4/CL3.4 | Highlight
Are economists getting climate dynamics right and does it matter?Armon Rezai, Simon Dietz, Frederick van der Ploeg, and Frank Venmans
We show that several of the most important economic models of climate change produce climate dynamics inconsistent with the current crop of models in climate science. First, most economic models exhibit far too long a delay between an impulse of CO2 emissions and warming. Second, few economic models incorporate positive feedbacks in the carbon cycle, whereby CO2 uptake by carbon sinks diminishes at the margin with increasing cumulative CO2 uptake and temperature. These inconsistencies affect economic prescriptions to abate CO2 emissions. Controlling for how the economy is represented, different climate models result in significantly different optimal CO2 emissions. A long delay between emissions and warming leads to optimal carbon prices that are too low and too much sensitivity of optimal carbon prices to the discount rate. Omitting positive carbon cycle feedbacks also leads to optimal carbon prices that are too low. We conclude it is important for policy purposes to bring economic models in line with the state of the art in climate science.
How to cite: Rezai, A., Dietz, S., van der Ploeg, F., and Venmans, F.: Are economists getting climate dynamics right and does it matter?, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20039, https://doi.org/10.5194/egusphere-egu2020-20039, 2020.
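The delay inconsistency criticised in the abstract can be sketched numerically. The two response curves below are stylised illustrations (not any specific model's calibration): one rises within about a decade, as current climate science suggests, the other has the multi-decade lag typical of older economic models:

```python
import numpy as np

t = np.arange(0, 101)  # years after a pulse of CO2 emissions

# Climate-science-like response: warming materialises within roughly a
# decade of the emissions pulse and then stays approximately flat.
fast = 0.2 * (1 - np.exp(-t / 4.0))

# Older economic-model-like response: a multi-decade lag before warming
# materialises, as criticised in the abstract.
slow = 0.2 * (1 - np.exp(-t / 40.0))

# first year at which each curve reaches 75% of its equilibrium warming
t75_fast = int(t[np.argmax(fast >= 0.15)])
t75_slow = int(t[np.argmax(slow >= 0.15)])
```

Because marginal damages arrive decades later under the slow response, they are discounted more heavily, which is the mechanism behind the "carbon prices that are too low" result.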
EGU2020-14954 | Displays | ITS5.4/CL3.4
Addressing uncertainty, multiple objectives, and adaptation in DICE: Can dynamic planning shed new light on the decision-making process?Angelo Carlino, Giacomo Marangoni, Massimo Tavoni, and Andrea Castelletti
Integrated assessment models are often criticized for: i) their simplified treatment of the severe uncertainties involved, ii) their strong dependency on the difficult quantification of future climate damages, and iii) their implicit description of adaptation strategies.
We propose a novel approach to tackle these three issues by coupling a closed-loop control strategy with an updated AD-DICE (ADaptation - Dynamic Integrated Climate-Economy) model. First, we explicitly model uncertain parametrizations and stochastic processes for climate sensitivity, atmospheric temperature, population, productivity, and carbon intensity. We then ensure an adaptive response to these uncertainties by implementing a closed-loop control system in which the decision variables are conditioned on state observations. This improves on the traditional static optimization approach. Second, we propose a multi-objective formulation of the optimization problem traditionally solved by DICE in order to separate temperature targets from economic objectives. This makes us less dependent on the quantification of climate damages while studying the tradeoffs to find compromise solutions. Third, we include an explicit description of adaptation strategies, introducing stock and flow adaptation investments as additional decision variables. Thanks to this last modification, we can also thoroughly analyze the tradeoffs between mitigation and adaptation.
Results show that the proposed method outperforms traditional static optimization in both single-objective and multi-objective contexts. Moreover, we confirm the absolute need for fast and strong mitigation, since we observe that the tradeoff between temperature and economic objectives is strongly reduced under uncertainty and when considering adaptation. On the other hand, different adaptation strategies correspond to different balances of present-value damages and economic objectives. By making this tradeoff between two socio-economic objectives explicit, the results reveal the political nature of the choice over climate adaptation strategies.
How to cite: Carlino, A., Marangoni, G., Tavoni, M., and Castelletti, A.: Addressing uncertainty, multiple objectives, and adaptation in DICE: Can dynamic planning shed new light on the decision-making process?, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-14954, https://doi.org/10.5194/egusphere-egu2020-14954, 2020.
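The difference between static (open-loop) and closed-loop policies described above can be sketched with a toy stochastic control problem. The dynamics, cost terms, and both policy rules are purely illustrative, not AD-DICE:

```python
import numpy as np

rng = np.random.default_rng(1)

def expected_cost(policy, n_paths=2_000, horizon=20):
    """Average total cost of an abatement policy over stochastic paths.
    Dynamics and cost terms are purely illustrative, not AD-DICE."""
    costs = np.empty(n_paths)
    for i in range(n_paths):
        temp, cost = 1.0, 0.0
        for step in range(horizon):
            a = policy(temp, step)                     # abatement in [0, 1]
            growth = max(rng.normal(0.08, 0.03), 0.0)  # uncertain forcing
            temp += growth * (1.0 - a)
            cost += 0.5 * a ** 2 + 0.2 * temp ** 2     # abatement + damages
        costs[i] = cost
    return costs.mean()

# open-loop: a fixed abatement schedule, decided once up front
open_loop = expected_cost(lambda temp, step: 0.5)

# closed-loop: abatement conditioned on the observed temperature state
closed_loop = expected_cost(lambda temp, step: min(1.0, 0.4 * temp))
```

Comparing the two expected costs on the same stochastic dynamics is the essence of the improvement the abstract attributes to closed-loop control: the feedback rule can react to realised uncertainty instead of committing to one schedule.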
EGU2020-1339 | Displays | ITS5.4/CL3.4 | Highlight
Global Sensitivity Analysis of Optimal Climate PoliciesAlena Miftakhova
Integrated assessment models, a major tool supporting climate policy decisions, are highly sensitive to their initial assumptions and calibrations. Despite a broad literature rich in both single-model and multi-model sensitivity analyses, universal, well-established practices are still missing in this field. This paper endorses structured global sensitivity analysis (GSA) as an indispensable routine in climate–economic modeling. An application of a high-efficiency GSA method based on polynomial chaos expansions to DICE provides two insights. First, only global and comprehensive—as opposed to local or selective—sensitivity analysis delivers a trustworthy picture of the uncertainty propagated through the model. Second, careful treatment of the model’s structure throughout the analysis reconciles the results with established analytical insights—enhancing these insights with more details. The efficient GSA method provides a comprehensive decomposition of the uncertainty in a model’s output while minimizing computational costs, and is hence potentially applicable to models of higher complexity.
How to cite: Miftakhova, A.: Global Sensitivity Analysis of Optimal Climate Policies, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1339, https://doi.org/10.5194/egusphere-egu2020-1339, 2020.
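The variance-based decomposition underlying global sensitivity analysis can be sketched as follows. This toy estimates first-order Sobol indices by brute-force binning of a made-up two-input model; the paper's method uses polynomial chaos expansions, which obtain the same decomposition far more efficiently:

```python
import numpy as np

rng = np.random.default_rng(2)

def model(x):
    # toy model output: strongly nonlinear in x0, weakly linear in x1
    return np.exp(x[:, 0]) + 0.5 * x[:, 1]

n, bins = 200_000, 50
x = rng.uniform(0.0, 1.0, size=(n, 2))
y = model(x)
var_y = y.var()

def first_order_index(j):
    # S_j = Var(E[Y | X_j]) / Var(Y), estimated by binning X_j
    idx = np.digitize(x[:, j], np.linspace(0, 1, bins + 1)[1:-1])
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    weights = np.array([(idx == b).mean() for b in range(bins)])
    return float((weights * (cond_means - y.mean()) ** 2).sum() / var_y)

s0, s1 = first_order_index(0), first_order_index(1)
```

The indices apportion the output variance among inputs; for this additive toy model they sum to roughly one, whereas interaction effects in a real IAM would leave a gap between the first-order sum and unity.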
EGU2020-10262 * | Displays | ITS5.4/CL3.4 | Highlight
A new scenario logic for the Paris Agreement long-term temperature goalJoeri Rogelj, Daniel Huppmann, Volker Krey, Keywan Riahi, Leon Clarke, Matthew Gidden, Zebedee Nicholls, and Malte Meinshausen
To understand how global warming can be kept well-below 2°C and even 1.5°C, climate policy uses scenarios that describe how society could transform in order to reduce its greenhouse gas emissions. Such scenarios are typically created with integrated assessment models that include a representation of the economy and the energy, land-use, and industrial systems. However, current climate change scenarios have a key weakness in that they typically focus on reaching specific climate goals in 2100 only.
This choice results in risky pathways that delay action and seemingly inevitably rely on large quantities of carbon-dioxide removal after mid-century. Here we propose a framework that more closely reflects the intentions of the UN Paris Agreement. It focuses on reaching a peak in global warming with either stabilisation or reversal thereafter. This approach provides a critical extension of the widely used Shared Socioeconomic Pathways (SSP) framework and reveals a more diverse picture: an inevitable transition period of aggressive near-term climate action to reach carbon neutrality can be followed by a variety of long-term states. It allows policymakers to explicitly consider near-term climate strategies in the context of intergenerational equity and long-term sustainability.
How to cite: Rogelj, J., Huppmann, D., Krey, V., Riahi, K., Clarke, L., Gidden, M., Nicholls, Z., and Meinshausen, M.: A new scenario logic for the Paris Agreement long-term temperature goal, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10262, https://doi.org/10.5194/egusphere-egu2020-10262, 2020.
EGU2020-20564 | Displays | ITS5.4/CL3.4
Systematic scenario process to support analysis of long-term emissions scenarios and transformation pathways for the IPCC WG3 6th Assessment ReportEdward A. Byers, Keywan Riahi, Elmar Kriegler, Volker Krey, Roberto Schaeffer, Detlef van Vuuren, Matthew Gidden, Daniel Huppmann, Jarmo Kikstra, Robin Lamboll, Malte Meinshausen, Zebedee Nicholls, and Joeri Rogelj
The assessment of long-term greenhouse gas emissions scenarios and societal transformation pathways is a key component of the work of IPCC Working Group 3 (WG3) on the Mitigation of Climate Change. A large scientific community, typically using integrated assessment models and econometric frameworks, supports this assessment in understanding both near-term actions and long-term policy responses and goals related to mitigating global warming. WG3 must systematically assess hundreds of scenarios from the literature to gain an in-depth understanding of long-term emissions pathways, across all sectors, leading to various levels of global warming. Systematically assessing and understanding the climate outcomes of each emissions scenario requires coordinated processes, which have developed over consecutive IPCC assessments. Here, we give an overview of the processes involved in the systematic assessment of long-term mitigation pathways as used in recent IPCC Assessments1 and being further developed for the IPCC 6th Assessment Report (AR6). The presentation will explain how modelling teams can submit scenarios to AR6, and we invite feedback on the process.
Following discussions amongst IPCC Lead Authors to define the scope of scenarios desired and variables requested, a call for scenarios to support AR6 was launched in September 2019. Modelling teams have registered and submitted scenarios through Autumn 2019 using a new and secure online submission portal, from which authorised Lead Authors can interrogate the scenarios interactively.
This analysis is underpinned by the open-source software pyam, a Python package specifically designed for analysis and visualisation of integrated assessment scenarios2. Submitted scenarios are automatically checked for errors and processed using a new climate assessment pipeline. The climate assessment involves infilling and harmonization3 of emissions data; the scenarios are then processed through Simple Climate Models, using the OpenSCM framework4, to give probabilistic climate implications for each scenario: atmospheric concentrations, radiative forcing, and global mean temperature. The climate assessment accounts for updated climate sensitivity estimates from CMIP6 and WG1, and scenarios are categorized according to climate outcomes, distinguishing between the timing and levels of net-negative emissions, the emissions peak, and temperature overshoot. Scenarios are also categorized by other indicators, for consistent use across WG3 chapters, such as population and GDP; primary and final energy use; and shares of renewables, bioenergy, and fossil fuels.
The automated framework also facilitates bolt-on analyses, such as estimating the population impacted by biophysical climate impacts5, and estimates of avoided damages with the social cost of carbon6.
Upon publication of the WG3 AR6 report, all scenario data used in the WG3 Assessment will be publicly available on a Scenario Explorer, an online tool for interrogating and visualizing the data that supports the report. In combination, this framework brings new levels of consistency, transparency and reproducibility to the assessment of scenarios in IPCC WG3 and will be a key resource for the climate community in understanding the main drivers of different transformation pathways.
How to cite: Byers, E. A., Riahi, K., Kriegler, E., Krey, V., Schaeffer, R., van Vuuren, D., Gidden, M., Huppmann, D., Kikstra, J., Lamboll, R., Meinshausen, M., Nicholls, Z., and Rogelj, J.: Systematic scenario process to support analysis of long-term emissions scenarios and transformation pathways for the IPCC WG3 6th Assessment Report, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20564, https://doi.org/10.5194/egusphere-egu2020-20564, 2020.
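The climate-outcome categorisation step described above can be sketched in a few lines. The scenario names, median-peak-warming inputs, and category thresholds below are illustrative placeholders, not the AR6 categories themselves:

```python
import pandas as pd

# illustrative scenario names and median peak-warming values (degC)
peak_warming = pd.DataFrame({
    "scenario": ["A", "B", "C", "D"],
    "p50_peak": [1.4, 1.6, 1.9, 2.6],
})

def categorise(p50):
    # placeholder thresholds for sorting scenarios by climate outcome
    if p50 < 1.5:
        return "below 1.5C"
    if p50 < 2.0:
        return "1.5C-2C with overshoot"
    return "above 2C"

peak_warming["category"] = peak_warming["p50_peak"].map(categorise)
```

In the actual pipeline the input to such a step is the probabilistic output of the simple climate models, so categories can be defined on exceedance probabilities rather than a single median value.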
EGU2020-21004 | Displays | ITS5.4/CL3.4
Modelling Historical Adaptation Rates to Inform Future Adaptation PathwaysMoritz Schwarz and Felix Pretis
Quantifying the climate impacts onto economic outcomes is crucial to inform mitigation and adaptation policy decisions in the context of anthropogenic climate change. Existing macro-level economic impact projections are often derived using calibrated Integrated Assessment Models (IAMs) or empirically-estimated econometric models. Both approaches, however, rarely consider how such impacts would change under macro-level adaptation interventions. Here, we present approaches to econometrically test climate impact estimates for their historical stability to approximate empirical macro-adaptation rates. By modelling deterministic trends and structural breaks as well as socio-economic drivers of adaptation, our approach could provide the basis for a new set of macro-economic impact projections that control for adaptation measures. Ultimately, adaptation-explicit impact projections could be used to inform both mitigation and adaptation decisions and further allow benchmarking of non-empirical modelling approaches.
How to cite: Schwarz, M. and Pretis, F.: Modelling Historical Adaptation Rates to Inform Future Adaptation Pathways, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21004, https://doi.org/10.5194/egusphere-egu2020-21004, 2020.
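The core idea, testing whether estimated climate sensitivities weaken over time, can be sketched on synthetic data. The data-generating process, OLS specification, and parameter values below are all illustrative, not the authors' dataset or full method:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic series: annual temperature shocks and economic growth, where the
# temperature sensitivity weakens linearly over time (i.e. adaptation).
n_years = 60
trend = np.arange(n_years) / n_years
temp = rng.normal(0.0, 1.0, n_years)
true_beta = -0.5 + 0.25 * trend                  # impact shrinks over time
growth = 2.0 + true_beta * temp + rng.normal(0.0, 0.1, n_years)

# OLS with a temperature x trend interaction: a positive interaction
# coefficient indicates weakening impacts, consistent with adaptation.
X = np.column_stack([np.ones(n_years), temp, temp * trend])
coef, *_ = np.linalg.lstsq(X, growth, rcond=None)
intercept, beta0, adaptation_rate = coef
```

Structural-break tests on the temperature coefficient play the analogous role when adaptation is abrupt rather than gradual.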
EGU2020-21342 | Displays | ITS5.4/CL3.4
Projections of global labour productivity under climate changeNicole van Maanen, Shouro Dasgupta, Simon N. Gosling, Franziska Piontek, Christian Otto, and Carl-Friedrich Schleussner
Labour productivity declines in hot conditions. The frequency and intensity of extreme heat events are projected to increase substantially with climate change across the world, which not only causes severe impacts on health and well-being but could also lead to adverse impacts on the economy, particularly in developing countries. Wet bulb globe temperature (WBGT) is a commonly used metric in occupational health that combines temperature and humidity to estimate the occurrence of heat stress. Although the links between heat stress and economic effects are well established, there are substantial differences between existing impact models of labour productivity.
Here we present results of future changes in labour productivity based on a comprehensive intercomparison of labour productivity models across indoor and outdoor working environments, locations, and countries. Under the framework of the Inter-Sectoral Impact Model Intercomparison Project (ISIMIP), we applied projections from multiple bias-corrected global climate models to multiple labour productivity impact models and consider different socioeconomic futures. In addition to models used in the existing literature, we use a newly developed model based on empirical exposure-response functions estimated from three hundred surveys (56 million observations) from 89 countries, which allows for projections at the sub-national level. Based on our model intercomparison results, we can provide robust and spatially explicit projections of changes in labour productivity across the globe. At the same time, our approach allows us to assess and compare existing models of labour productivity, thereby covering multiple dimensions of uncertainty.
How to cite: van Maanen, N., Dasgupta, S., Gosling, S. N., Piontek, F., Otto, C., and Schleussner, C.-F.: Projections of global labour productivity under climate change, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21342, https://doi.org/10.5194/egusphere-egu2020-21342, 2020.
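The WBGT metric mentioned above can be approximated from temperature and humidity alone. The sketch below combines a published simplified approximation (which ignores wind and radiation) with a logistic exposure-response curve whose parameters are placeholders, not the survey-estimated functions from the abstract:

```python
import math

def wbgt_simplified(temp_c, rel_humidity):
    """Simplified WBGT approximation (Australian Bureau of Meteorology form)
    from air temperature (degC) and relative humidity (%). It ignores wind
    and solar radiation, so it is an approximation only."""
    vapour = rel_humidity / 100.0 * 6.105 * math.exp(
        17.27 * temp_c / (237.7 + temp_c))
    return 0.567 * temp_c + 0.393 * vapour + 3.94

def work_capacity(wbgt, alpha=33.0, beta=6.0):
    """Illustrative logistic exposure-response: workable share of an hour at
    a given WBGT. alpha and beta are placeholder parameters, not the
    empirically estimated values from the abstract."""
    return 1.0 / (1.0 + (wbgt / alpha) ** beta)

hot_humid = wbgt_simplified(35.0, 80.0)   # tropical heat-stress conditions
mild = wbgt_simplified(22.0, 50.0)        # temperate conditions
```

Differences between impact models largely come down to the shape and calibration of the exposure-response function applied to WBGT, which is what the intercomparison above quantifies.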
EGU2020-21512 | Displays | ITS5.4/CL3.4
Assessing climate impacts on English economic growth (1645–1740): an econometric approachJosé Luis Martinez-Gonzalez
British pre-industrial economic growth has traditionally been analysed from the Malthusian point of view and from other, more optimistic approaches, but in many cases environmental factors have been ignored. This article explores the inclusion of climate in this general debate, focusing on one of the coldest periods of the last 500 years, known as the Maunder Minimum. The provisional results suggest that climate change and the resulting adaptations may have influenced the start of the English Agricultural Revolution, the Energy Transition, and the European Divergence. However, from an econometric point of view these results are not fully conclusive, making it necessary to continue working with better primary sources and other alternative methodologies.
How to cite: Martinez-Gonzalez, J. L.: Assessing climate impacts on English economic growth (1645–1740): an econometric approach, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21512, https://doi.org/10.5194/egusphere-egu2020-21512, 2020.
EGU2020-22104 | Displays | ITS5.4/CL3.4
Investigating the GDP-CO2 relationship using a neural network approachSebastian Jensen, Eric Hillebrand, and Mikkel Bennedsen
Exploiting a national-level panel of per capita CO2 emissions and GDP data, we investigate the GDP-CO2 relationship using a data-driven approach. We conduct an in-sample analysis in which we investigate the shape of the GDP-CO2 relationship. Utilizing the learned shape of the GDP-CO2 relationship, we project CO2 emissions through 2100, using the same set of GDP and population growth scenarios as used by the Intergovernmental Panel on Climate Change (IPCC) for its sixth assessment report, due for release in 2021-22. Our analysis is carried out at two levels: at the global level, and at the level of five large regions of the world. We consider a semiparametric model specification which places no restrictions on the functional relationship between GDP and CO2 but allows for country- and time-specific fixed effects. The nonparametric component of our model is specified as a feedforward neural network, which theoretically ensures universal approximation capabilities. In a simulation study, we show that our model is able to capture various complex relationships in finite samples of realistic sizes.
How to cite: Jensen, S., Hillebrand, E., and Bennedsen, M.: Investigating the GDP-CO2 relationship using a neural network approach, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22104, https://doi.org/10.5194/egusphere-egu2020-22104, 2020.
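The abstract above does not give the model's exact functional form; a minimal sketch of the forward pass of such a semiparametric specification, with hypothetical parameter names and placeholder weights, might look like:

```python
import math

def nn_forward(x, W1, b1, W2, b2):
    # One hidden layer with tanh activation: a feedforward network of this
    # form is a universal approximator for continuous functions.
    hidden = [math.tanh(sum(w * xj for w, xj in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return sum(w * h for w, h in zip(W2, hidden)) + b2

def co2_per_capita(gdp, country, year, params):
    # Semiparametric panel specification: nonparametric g(GDP) learned by
    # the network, plus additive country and time fixed effects.
    g = nn_forward([gdp], params["W1"], params["b1"], params["W2"], params["b2"])
    return g + params["alpha"][country] + params["gamma"][year]
```

In the study the network weights and the fixed effects would be estimated jointly from the panel; here they are placeholders illustrating only the additive structure of the specification.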
EGU2020-9697 | Displays | ITS5.4/CL3.4
Trend analysis and transient climate sensitivity revealed by CMIP6Menghan Yuan and Thomas Leirvik
CMIP6 (Coupled Model Intercomparison Project Phase 6) is currently publishing updated simulations for Global Climate Models (GCMs). In this paper, we focus on analyzing surface temperature and surface downward solar radiation (SDSR), two essential variables for estimating the transient climate sensitivity (TCS). We carry out the analysis for the five GCMs that have published data so far; more GCMs will be included as data become available. The research period runs from 1960 to 2014, providing the latest available projection for climate forcings. Temperature projections accord reasonably well with observations. This is no surprise, as CMIP5 data were also aligned with observations. A striking improvement, however, is observed with respect to SDSR. According to Storelvmo et al. (2018), the CMIP5 models showed no statistically significant trend over time and revealed an egregious mismatch with observations, casting major doubt on their fidelity. In the CMIP6 models, however, this mismatch between simulations and observations is substantially alleviated. Not only is a negative trend recorded, but the significant drop around the beginning of the 1990s, due to the Mount Pinatubo eruption, is also reproduced, albeit at a slightly smaller scale than in the observations for that period.
Based on the econometric framework of Phillips et al. (2019), we estimate the TCS for the five GCMs. We find that the TCS estimates range from 2.03 K to 2.65 K, and each TCS reported for the five GCMs lies within the 95% confidence interval of our corresponding estimate. It is worth noting that a 25-year rolling-window estimation indicates that the average TCS of the GCMs varies greatly over time: it has a significant upward trend from the beginning of the 1990s until 2009, and flattens, or even decreases, afterward.
We also compute the sample average of the TCS estimates. For the period 1964-2005, which is used in Phillips et al. (2019), the average TCS is 1.82 for the CMIP5 models and 2.07 for CMIP6; the difference is not significant. For the 1964-2014 period, however, the average TCS estimate for CMIP6 is 2.38, which is significantly higher than the average CMIP5 estimate. Since the CMIP6 simulations reproduce the observed SDSR trends much better than the CMIP5 simulations, this indicates both that the econometric framework of Phillips et al. (2019) works well and captures key drivers of the climate, and that the true TCS is most likely close to the TCS estimated from observations.
How to cite: Yuan, M. and Leirvik, T.: Trend analysis and transient climate sensitivity revealed by CMIP6, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9697, https://doi.org/10.5194/egusphere-egu2020-9697, 2020.
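The 25-year rolling-window estimation described above can be illustrated with a deliberately simple stand-in for the Phillips et al. (2019) framework: regress temperature on radiative forcing within each window and scale the slope by the forcing of a CO2 doubling (roughly 3.7 W/m²). The actual econometric framework is considerably richer; this sketch only shows the rolling-window mechanics.

```python
def ols_slope(x, y):
    # Ordinary least squares slope of y on x.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def rolling_tcs(forcing, temp, window=25, f2x=3.7):
    # The slope (K per W/m^2) in each window, scaled by the forcing of a
    # CO2 doubling, gives a rolling estimate of the TCS in kelvin.
    return [ols_slope(forcing[i:i + window], temp[i:i + window]) * f2x
            for i in range(len(temp) - window + 1)]
```

With annual series of forcing and temperature, the returned list traces how the implied sensitivity drifts over time, which is the quantity the rolling-window analysis inspects.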
EGU2020-12487 | Displays | ITS5.4/CL3.4
Statistical Approaches for Modeling Ice Sheet InterconnectivityAndrew Martinez, Luke Jackson, Felix Pretis, and Katarina Juselius
The greatest sources of uncertainty for future sea-level rise are the Greenland and Antarctic ice sheets. An important aspect of this uncertainty is the potential interconnectivity between them, which may amplify underlying instabilities in the individual ice sheets. We explore these connections empirically by modelling the ice sheets as a cointegrated system. We consider two specifications, which allow the ice sheets to follow either an I(1) or an I(2) process, in order to disentangle the long-run, theory-consistent relationships in the data. We examine the stability of these relationships over time, both in and out of sample, and examine how a sudden loss of ice in Greenland propagates through the system. We show that a 1 Gigatonne loss of ice leads to a large and persistent loss of ice in West Antarctica, which is partially offset by an accumulation of ice in East Antarctica. Accounting for the long-run interactions between the ice sheets helps to improve our understanding of future instabilities and provides useful projections of the future paths of the ice sheets.
How to cite: Martinez, A., Jackson, L., Pretis, F., and Juselius, K.: Statistical Approaches for Modeling Ice Sheet Interconnectivity, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12487, https://doi.org/10.5194/egusphere-egu2020-12487, 2020.
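The propagation of a Greenland shock through a cointegrated system can be sketched with a stylised two-variable error-correction model; the I(1)/I(2) specifications in the study are far richer, and the adjustment coefficients below are illustrative only.

```python
def impulse_response(alpha, beta, shock, horizon=20):
    # Stylised vector error-correction model for two ice-sheet mass series:
    # each series adjusts toward the long-run relation x1 = beta * x2.
    x = list(shock)                      # initial deviation, e.g. a 1 Gt loss
    path = [tuple(x)]
    for _ in range(horizon):
        ect = x[0] - beta * x[1]         # error-correction term
        x = [x[0] + alpha[0] * ect,      # adjustment of series 1
             x[1] + alpha[1] * ect]      # adjustment of series 2
        path.append(tuple(x))
    return path
```

With opposite-signed adjustment coefficients such as alpha = (-0.1, 0.1) and beta = 1.0, a unit shock to the first series decays geometrically toward the long-run relation, with part of the adjustment borne by the second series, mirroring the offsetting accumulation the abstract describes.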
EGU2020-2789 | Displays | ITS5.4/CL3.4
Projecting the effect of climate-change-induced increases in extreme rainfall on residential property damagesJacob Pastor, Ilan Noy, Isabelle Sin, Abha Sood, David Fleming-Munoz, and Sally Owen
New Zealand’s public insurer, the Earthquake Commission (EQC), provides residential insurance for some weather-related damage. Climate change and the expected increase in the intensity and frequency of weather-related events are likely to translate into higher damages and thus an additional financial liability for the EQC. We project future insured damages from extreme precipitation events associated with projected climatic change. We first estimate the empirical relationship between extreme precipitation events and the EQC’s weather-related insurance claims, based on a complete dataset of all claims from 2000 to 2017. We then use this estimated relationship, together with climate projections based on future GHG concentration scenarios from six different dynamically downscaled Regional Climate Models, to predict the impact of future extreme precipitation events on EQC liabilities for different time horizons up to the year 2100. Our results show that the predicted adverse impacts vary over time and space. The percent change between projected and past damages—the climate change signal—ranges between an increase of 7% and 26% by the end of the century. We also give detailed caveats as to why these quantities might be mis-estimated. The projected increase in the public insurer’s liabilities could also be used to inform private insurers, regulators, and policymakers who are assessing the future performance of both the public and private insurers that cover weather-related risks in the face of climatic change.
How to cite: Pastor, J., Noy, I., Sin, I., Sood, A., Fleming-Munoz, D., and Owen, S.: Projecting the effect of climate-change-induced increases in extreme rainfall on residential property damages, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2789, https://doi.org/10.5194/egusphere-egu2020-2789, 2020.
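The abstract does not specify the functional form of the estimated precipitation-claims relationship; a minimal log-linear sketch, with a hypothetical damage elasticity fitted by OLS and then applied to projected rainfall extremes, could look like:

```python
import math

def fit_loglinear(precip, claims):
    # log(claims) = a + b*log(precip); b is the damage elasticity.
    x = [math.log(p) for p in precip]
    y = [math.log(c) for c in claims]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

def project_damages(a, b, future_precip):
    # Apply the fitted relationship to projected precipitation extremes.
    return [math.exp(a) * p ** b for p in future_precip]
```

An elasticity b above one, for instance, would mean that claims grow more than proportionally with rainfall intensity, which is why a moderate shift in extremes can produce the sizeable end-of-century damage increases the study reports.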
EGU2020-22094 | Displays | ITS5.4/CL3.4
Optimizing rotation management of forest plantations: the effects of carbon accounting methodsGuolong Hou, Claudio O. Delang, Xixi Lu, and Roland Olschewski
Forests have great value both for carbon storage and for timber production. Afforestation has been widely undertaken across countries to achieve goals in poverty alleviation and environmental protection, specifically in mitigating the atmospheric carbon concentration. This study determines the optimal rotations of different forest types in China’s afforestation projects, considering the costs and benefits of afforestation and the value of carbon under two different carbon accounting rules, tCER and lCER accounting. The optimal rotation periods of three tree species, Eucalyptus, Chinese fir and Poplar, were estimated using data from various Chinese regions. We apply a modified Hartman rotation model to calculate the optimal rotation period. Results show that, at a carbon price of 15 USD per t CO2 for a 5-year validation period, the optimal rotation periods are all extended once the value of carbon sequestration is considered, with the largest increase (5 years, or 29%) found for Chinese fir (E, N, NE) under tCER accounting. For Eucalyptus, however, the optimal rotation is extended by 3 years, or 60%, under lCER accounting. Poplar plantations are less influenced by either tCER or lCER accounting. We further examine the sensitivity of the optimal decision to the carbon price and the interest rate. Results show that the optimal decision for Chinese fir is highly sensitive to changes in the carbon price or interest rate under tCER accounting, while that for Eucalyptus is the most sensitive under lCER accounting. We demonstrate the significant effects of carbon accounting methods and plantation species on the determination of the optimal rotation period for afforestation projects. The findings can contribute to the sustainable management of carbon sequestration projects, and the methodology can also be applied to other regions in the developing world.
How to cite: Hou, G., Delang, C. O., Lu, X., and Olschewski, R.: Optimizing rotation management of forest plantations: the effects of carbon accounting methods, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22094, https://doi.org/10.5194/egusphere-egu2020-22094, 2020.
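A Hartman-type rotation choice can be sketched as a discrete search over rotation lengths that maximizes the land expectation value of an infinite sequence of identical rotations; the timber and carbon value functions passed in below are hypothetical stand-ins, not the study's calibration.

```python
def optimal_rotation(timber_value, carbon_value, planting_cost, rate, max_T=60):
    # Hartman-style rule: choose the rotation T maximizing the land
    # expectation value (LEV) of an infinite sequence of rotations.
    best_T, best_lev = None, float("-inf")
    for T in range(1, max_T + 1):
        d = (1 + rate) ** -T                  # discount factor over one rotation
        npv = timber_value(T) * d + carbon_value(T) - planting_cost
        lev = npv / (1 - d)                   # geometric sum over all rotations
        if lev > best_lev:
            best_T, best_lev = T, lev
    return best_T, best_lev
```

Carbon payments that accrue with stand age add value to keeping trees standing, which is the mechanism behind the extended rotations the abstract reports; tCER and lCER accounting differ in how that carbon value function is constructed.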
EGU2020-12448 | Displays | ITS5.4/CL3.4
Economic losses from changing hydrology under future climate change, glacier shrinkage and growing water demand in the Tropical AndesFabian Drenkhan, Randy Muñoz, Christian Huggel, Holger Frey, Fernando Valenzuela, and Alina Motschmann
In the Tropical Andes, glaciers play a fundamental role in sustaining human livelihoods and ecosystems in headwater areas and further downstream. However, current rates of glacier shrinkage driven by climate change, as well as increasing water demand, pose a threat to long-term water supply. While a growing body of research has covered the impacts of climate change and glacier shrinkage on the terrestrial water cycle and potential disaster risks, the associated potential economic losses have barely been assessed.
Here we present an integrated surface-groundwater assessment model for multiple water sectors under current conditions (1981-2016) and future scenarios (2050) of glacier shrinkage and growing water demand. As a case study, the lumped model has been applied to the Santa river basin (including the Cordillera Blanca, Andes of Peru) across three subcatchments, and considers effects from evapotranspiration, environmental flows and backflows of water use. To this end, coupled greenhouse gas concentration (RCP2.6 and RCP8.5) and socioeconomic scenarios are used, which provide a broad range for the magnitude of glacier and water volume changes and the associated economic impacts. Finally, the net water volume released in the long term due to deglaciation effects is quantified and converted, via multiple metrics, into potential economic costs and losses for the agriculture, household and hydropower sectors. Additionally, the potential damages from outburst floods from current and future lakes have been included. Results for the entire Santa river basin show that water availability would diminish by about 11-16% (57-78 × 10⁶ m³) in the dry season (June-August) and by some 7-10% (103-155 × 10⁶ m³) during the wet season (December-February) under the selected glacier shrinkage scenarios until 2050. This is a consequence of the diminishing glacier contribution to streamflow, which by 2050 would decline from about 45% to 33% for June-August and from 6% to 4% for December-February. A first rough estimate suggests associated economic losses for the main water demand sectors (agriculture, hydropower, drinking water) on the order of 300 × 10⁶ USD/year by 2050. Additionally, with ongoing glacier shrinkage and the formation of new lakes, about 45,000 inhabitants and 30,000 buildings are expected to be exposed to the risk of outburst floods in the 21st century.
The pressure on water resources and interconnected socio-environmental systems in the basin is already challenging and expected to exacerbate further within the next decades. Water demand is currently increasing considerably, driven by growing irrigated (export) agriculture, population and energy demand, the latter sustained in large part by hydropower. The coupling of potential climate-change-driven water scarcity with a lack of water governance and high human vulnerabilities bears strong potential for conflict, with negative feedbacks for socio-economic development in the Santa basin and beyond. In this context, our coupled hydro-glacial economic impact model provides important support for future decision-making and long-term water management planning. However, uncertainties are relatively high (uncertainty range to be estimated) due to a lack of (good) hydro-climatic and socio-economic information at appropriate spatiotemporal scales. The presented model framework is potentially transferable to other high mountain catchments in the Tropical Andean region and beyond.
How to cite: Drenkhan, F., Muñoz, R., Huggel, C., Frey, H., Valenzuela, F., and Motschmann, A.: Economic losses from changing hydrology under future climate change, glacier shrinkage and growing water demand in the Tropical Andes, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12448, https://doi.org/10.5194/egusphere-egu2020-12448, 2020.
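The abstract does not detail how a seasonal water deficit is converted into a monetary loss; a minimal sketch, with hypothetical sector shares and unit values (USD per m³), would simply price each sector's share of the deficit:

```python
def water_deficit_loss(deficit_m3, sector_share, usd_per_m3):
    # Allocate the seasonal deficit across sectors and value each cubic
    # metre at a sector-specific unit value (USD per m^3).
    return sum(deficit_m3 * sector_share[s] * usd_per_m3[s]
               for s in sector_share)

# Illustrative numbers only: a 70 x 10^6 m^3 dry-season deficit split
# across the three demand sectors named in the study.
shares = {"agriculture": 0.6, "hydropower": 0.3, "drinking": 0.1}
values = {"agriculture": 0.05, "hydropower": 0.04, "drinking": 0.30}
loss = water_deficit_loss(70e6, shares, values)
```

The study's "multiple metrics" conversion is certainly more elaborate (e.g. accounting for backflows and seasonal timing); this linear allocation only illustrates the deficit-to-loss bookkeeping.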
EGU2020-21564 | Displays | ITS5.4/CL3.4
The Global Long-term Effects of Storm Surge Damages on Human SettlementsSven Kunze
The influence of natural conditions on human settlements is immense. While a friendly and calm environment can lead to prosperity and growth, a hostile one with frequent natural disasters can result in stagnation, collapse, and even death. Tropical cyclones, as unpredictable and recurring disastrous events, pose a considerable threat to the prosperous development of human societies. The IPCC estimates that globally around 250 million people are vulnerable to storm surge events every year. If the threat is large enough, a natural adaptation strategy would seem to be to move away to less dangerous places. It can thus be considered puzzling that there is a positive trend of moving into coastal flooding zones in Sub-Saharan Africa, North America and Asia, one that is projected to continue in the future. Additionally, climate change may increase local exposure to storm surge through rising sea levels and the changing intensity of tropical cyclones.
Given this worrisome development, a systematic analysis of the relationship between settlement structures and tropical cyclones is called for. In this paper we analyze whether people relocate from hazardous areas impacted by tropical cyclones. Importantly, the greatest threat from a tropical cyclone generally comes from the accompanying storm surge. But because storm surge levels are hard to model, to date no global (economic) impact study has modelled or used historic storm surge data to estimate the economic impact of tropical storms. Rather, most studies focus only on wind damages, while others also include rain damages. In this paper, we close this gap by explicitly modeling historic storm surge worldwide from 1850 to 2015 and linking it to local population settlement.
By combining data on bathymetry, tidal cycles, weather conditions, and pressure drop models for the tropical cyclones, we are able to estimate spatial storm surge data at a resolution of 5 arc minutes. These data then allow us, in a first step, to analyze their systematic impact on historical geo-referenced population and settlement structure data at the same 5 arc minute spatial scale. We are able to show some interesting population patterns in response to tropical cyclones. Contrary to many empirical studies, we find that people do settle away from hazardous areas. This effect is especially large for low-elevation coastal zones, while for non-low-elevation coastal areas we find no effect. The same pattern can be found for developing and developed countries, but the shrinking of the population is 39 percent larger in developing countries.
How to cite: Kunze, S.: The Global Long-term Effects of Storm Surge Damages on Human Settlements, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21564, https://doi.org/10.5194/egusphere-egu2020-21564, 2020.
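The study's surge modelling combines bathymetry, tidal cycles, weather conditions and pressure-drop models; as a greatly simplified stand-in, a first-order surge estimate can combine the inverse-barometer effect (roughly 1 cm of sea-level rise per hPa of pressure drop) with a wind-setup term. The setup coefficient `k` below is a hypothetical tuning constant, not a value from the study.

```python
def surge_height_m(pressure_drop_hpa, wind_ms, fetch_m, depth_m, k=3.5e-7):
    # Inverse-barometer contribution: ~1 cm of surge per hPa of pressure drop.
    inverse_barometer = 0.01 * pressure_drop_hpa
    # Wind setup: grows with wind speed squared and fetch, shrinks with depth,
    # which is why shallow coastal shelves see the largest surges.
    wind_setup = k * wind_ms ** 2 * fetch_m / depth_m
    return inverse_barometer + wind_setup
```

Evaluating such a relation on gridded bathymetry and cyclone tracks, cell by cell, is conceptually how a 5 arc minute surge field could be built, though the study's model is far more complete.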
EGU2020-2259 | Displays | ITS5.4/CL3.4
Trade-offs and Synergies of Ecosystem Services in Karst Area of China Driven by Grain-for-Green Program
Xiaofeng Wang
As an important means of regulating the relationship between human and natural ecosystems, ecological restoration programs play a key role in restoring ecosystem functions. The Grain-for-Green Program (GFGP), one of the world's most ambitious ecosystem conservation set-aside programs, aims to convert farmland on steep slopes to forestland or grassland to increase vegetation coverage. It was widely implemented from 1999 to 2015 and exerted significant influence on land use and ecosystem services (ESs). In this study, three ecological models (InVEST, RUSLE, and CASA) were used to calculate three key ESs, water yield (WY), soil conservation (SC), and net primary production (NPP), in the Karst area of southwestern China from 1982 to 2015, and the impact of the GFGP on ESs and their trade-offs was analyzed. This provides practical guidance for ecological regulation in the Karst area of China under global climate change. Results showed that ESs and their trade-offs changed dramatically under the GFGP. Temporally, SC and NPP exhibited an increasing trend, while WY exhibited a decreasing trend. Spatially, SC generally decreased from west to east, NPP increased from north to south, and WY increased from west to east. NPP and SC, as well as SC and WY, developed in the direction of trade-offs under the GFGP, while NPP and WY developed in the direction of synergy. Therefore, future ecosystem management and restoration policy-making should consider trade-offs among ESs so as to achieve sustainable provision of ESs.
How to cite: Wang, X.: Trade-offs and Synergies of Ecosystem Services in Karst Area of China Driven by Grain-for-Green Program, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2259, https://doi.org/10.5194/egusphere-egu2020-2259, 2020.
EGU2020-12740 | Displays | ITS5.4/CL3.4
Assessing China's Digital Economy and Environmental Sustainability: A Regional Low-Carbon Perspective
Xiuting Piao and Xuefeng Cui
The digital economy is becoming a new engine of China's economic transformation, leading a new path of green and low-carbon development. However, the positive and negative effects of the digital economy on the environment are widely debated. The energy consumption of China's digital economy industry is still increasing, yet it has received little attention. This paper studies the emerging links between the digital economy and low-carbon sustainable development. Understanding the impact of the digital economy on carbon emissions is critical to addressing the challenges of climate change in the digital age.
By integrating input-output methods, this paper establishes a comprehensive framework to evaluate China's digital economy and environmentally sustainable development. The framework can not only evaluate carbon emissions in the various sub-industries of the digital economy, but also reveal their formation and change mechanisms by identifying source industries, transfer paths and economic drivers. Using the STIRPAT model and provincial panel data from 2001 to 2016, this paper investigates the impact of the digital economy industry on carbon emissions at the national and regional levels. In addition, we assess the carbon footprint of the entire digital industry, including the relative contributions of the infrastructure, core and integration components of the digital economy. The results show that the digital economy helps reduce China's carbon emissions. The digital economy in the central region has a greater impact on carbon emissions than in the eastern region, while the impact in the western region is insignificant. As the digital economy enters the energy system, energy consumption can be reduced and energy efficiency improved, which helps reduce carbon emissions in the energy sector and contributes about 3% toward the sector's emission reduction goal. The positive and negative impacts of the digital economy on the environment result in an inverted U-shaped relationship between the digital economy and carbon emissions. The inflection point of the digital economy is slightly above the medium level, which means that carbon emissions may increase further as the digital economy develops at this stage. Left uncontrolled, the relative contribution of the digital economy to carbon emissions may exceed 10% by 2030. These findings not only advance the existing literature but also deserve special attention from policy makers.
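The STIRPAT specification referred to above is conventionally estimated in log-linear form, ln I = a + b ln P + c ln A + d ln T, where I is the environmental impact (here, carbon emissions), P population, A affluence, and T technology. The sketch below illustrates that estimation on entirely synthetic data; using a digital-economy index as the technology term T is an assumption for illustration, not the authors' actual dataset or code.

```python
import numpy as np

# Synthetic stand-ins for a STIRPAT panel: the log-linear model is
# ln(I) = a + b*ln(P) + c*ln(A) + d*ln(T) + error.
rng = np.random.default_rng(0)
n = 200
P = rng.uniform(1e6, 1e8, n)            # population
A = rng.uniform(1e3, 5e4, n)            # affluence (GDP per capita)
T = rng.uniform(0.1, 1.0, n)            # assumed digital-economy index
true = np.array([2.0, 0.8, 0.6, -0.3])  # a, b, c, d used to generate data
lnI = (true[0] + true[1] * np.log(P) + true[2] * np.log(A)
       + true[3] * np.log(T) + rng.normal(0, 0.05, n))

# Ordinary least squares on the log-linearized model
X = np.column_stack([np.ones(n), np.log(P), np.log(A), np.log(T)])
coef, *_ = np.linalg.lstsq(X, lnI, rcond=None)
print(coef)  # estimates recover roughly [2.0, 0.8, 0.6, -0.3]
```

The elasticities b, c, d read off directly as the fitted slopes, which is why the log-linear form is the standard way STIRPAT is taken to panel data.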
How to cite: Piao, X. and Cui, X.: Assessing China's Digital Economy and Environmental Sustainability: A Regional Low-Carbon Perspective, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12740, https://doi.org/10.5194/egusphere-egu2020-12740, 2020.
ITS5.6/NH9.22 – Climate services for insurance and adaptation: catastrophe and extreme climate risk assessment
EGU2020-20647 | Displays | ITS5.6/NH9.22
Climate change, historical data and catastrophe modelling
Richard Dixon, Sam Franklin, Len Shaffrey, and Debbie Clifford
This presentation will discuss climate change in the context of catastrophe modelling and tail risk. Given that the catastrophe modelling industry typically has only short historical records, which provide limited information as to whether hazard is non-stationary, what methods and datasets may help the catastrophe modelling community better understand how and whether risk is changing over time?
The issues will be framed using examples of output from a multi-year, multi-ensemble 60 km global climate simulation, in which extra-tropical windstorm daily maximum gust data have been converted into yearly aggregate European insurance losses with the help of PERILS European industry exposure data. The data are used to show how reliance on single historical datasets can produce misleading trends in catastrophe losses, but also to point to underlying trends in risk that single historical datasets may not be able to detect.
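One widely used way to convert a gust footprint plus exposure data into a storm loss index, in the spirit of the Klawa and Ulbrich loss-index approach rather than the authors' exact method, is a cubic excess of gust speed over a local high quantile, weighted by exposure at each grid cell. A minimal sketch on synthetic data:

```python
import numpy as np

# Synthetic footprint: daily maximum gust per grid cell, a local
# 98th-percentile gust threshold, and an exposure value per cell.
rng = np.random.default_rng(7)
n_cells = 5_000
gust = rng.weibull(2.0, n_cells) * 15.0   # daily max gust, m/s (synthetic)
v98 = np.full(n_cells, 22.0)              # local 98th-percentile threshold
exposure = rng.uniform(0, 1e6, n_cells)   # insured value per cell (synthetic)

# Loss index: exposure-weighted cubic exceedance of the threshold;
# cells below the threshold contribute nothing.
excess = np.maximum(gust / v98 - 1.0, 0.0)
loss_index = np.sum(exposure * excess**3)
print(f"storm loss index: {loss_index:.3e}")
```

Summing this index per storm and per year gives the kind of yearly aggregate loss series the abstract describes, which can then be compared across ensemble members.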
How to cite: Dixon, R., Franklin, S., Shaffrey, L., and Clifford, D.: Climate change, historical data and catastrophe modelling, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20647, https://doi.org/10.5194/egusphere-egu2020-20647, 2020.
EGU2020-7462 | Displays | ITS5.6/NH9.22 | Highlight
Europe Windstorm variations: past, present and future
Stephen Cusack and Davide Panosetti
Ten years ago, we studied 101 years (1910-2010) of wind observations at five stations spread throughout the Netherlands, representative of a wider area in Europe containing regions of dense exposure. The raw wind speed data were homogenised using detailed station metadata to account for changes in observation practices, then processed to form a windstorm loss index timeseries. Our analysis found large changes in annual storm losses at multidecadal timescales, with two minima occurring in the 1960s and the 2000s. The more recent minimum was three to four times lower than the century-scale peak of indexed losses in the 1980s and early 1990s, and was primarily driven by the reduced rate of occurrence of damaging storms.
We recently extended the storm loss timeseries up to 2019 and results confirmed what most of us expected: the lull continues. A recent industry survey indicated the ongoing quiet period is the top science issue for European windstorms, presumably because its large amplitude dwarfs other uncertainties in storm loss climate. The burning question for re/insurance is: what to expect over the next few years? What roles will natural climate variability and anthropogenic forcings play in the medium-term future evolution of our storm climate? Researchers have begun to supply some answers, finding strong empirical links between Arctic sea-ice, the state of the North Atlantic Ocean, and European winter climate, backed up by process-based studies connecting these variables. We will review their findings in the context of storm loss variability, then identify questions which could be key to anticipating the storm activity over the next few years.
How to cite: Cusack, S. and Panosetti, D.: Europe Windstorm variations: past, present and future, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7462, https://doi.org/10.5194/egusphere-egu2020-7462, 2020.
EGU2020-2568 | Displays | ITS5.6/NH9.22
Estimation of Global Synthetic Tropical Cyclone Hazard Probabilities using the STORM dataset
Nadia Bloemendaal, Ivan Haigh, Hans de Moel, Sanne Muis, and Jeroen Aerts
Tropical cyclones (TCs), also referred to as hurricanes or typhoons, are amongst the deadliest and costliest natural disasters, affecting people, economies and the environment in coastal areas around the globe when they make landfall. In 2017, Hurricanes Harvey, Irma and Maria entered the top-5 costliest Atlantic hurricanes ever recorded, with combined losses estimated at $220 billion. Therefore, to minimize future loss of life and property and to aid risk mitigation efforts, it is crucial to perform accurate TC risk assessments in low-lying coastal regions. Calculating TC risk at a global scale, however, has proven to be difficult, given the limited temporal and spatial information on landfalling TCs around much of the global coastline.
In this research, we present a novel approach to calculate TC risk under present and future climate conditions on a global scale, using the newly developed Synthetic Tropical cyclOne geneRation Model (STORM). For this, we extract 38 years of historical data from the International Best-Track Archive for Climate Stewardship (IBTrACS). This dataset is used as input for the STORM algorithm, which statistically extends it from 38 years to 10,000 years of TC activity. Validation shows that the STORM dataset preserves the TC statistics found in the original IBTrACS dataset. The STORM dataset is then used to calculate global-scale return periods of TC-induced wind speeds at 0.1° resolution. This return period dataset can then be used to assess low-probability extreme events all around the globe. Moreover, we demonstrate the application of this dataset for TC risk modeling on small islands, e.g. in the Caribbean or the South Pacific Ocean.
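The step from a multi-millennial synthetic event set to return periods can be done with simple empirical plotting positions. The sketch below is not the STORM code: it uses Gumbel-distributed synthetic annual-maximum wind speeds at a single grid cell as a stand-in, with the distribution and its parameters assumed purely for illustration.

```python
import numpy as np

# Stand-in for 10,000 years of simulated annual-maximum TC wind speeds
# at one grid cell (synthetic Gumbel draws, not STORM output).
rng = np.random.default_rng(42)
years = 10_000
annual_max = rng.gumbel(loc=30.0, scale=8.0, size=years)  # m/s

# Empirical return periods via the Weibull plotting position:
# the k-th largest annual maximum has return period (N + 1) / k.
order = np.sort(annual_max)[::-1]          # descending order
ranks = np.arange(1, years + 1)
return_period = (years + 1) / ranks

# Wind speed with an approximately 100-year return period
idx = np.argmin(np.abs(return_period - 100.0))
print(f"~100-yr wind speed: {order[idx]:.1f} m/s")
```

With 10,000 simulated years, the 100-year event is estimated from about a hundred exceedances rather than extrapolated from a 38-year record, which is the core advantage of the synthetic-resampling approach.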
How to cite: Bloemendaal, N., Haigh, I., de Moel, H., Muis, S., and Aerts, J.: Estimation of Global Synthetic Tropical Cyclone Hazard Probabilities using the STORM dataset, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2568, https://doi.org/10.5194/egusphere-egu2020-2568, 2020.
EGU2020-7676 | Displays | ITS5.6/NH9.22
Using PRIMAVERA high-resolution global climate models for European windstorm risk assessment in present and future climates for the (re)insurance industry
Julia Lockwood, Erika Palin, Galina Guentchev, and Malcolm Roberts
PRIMAVERA is a European Union Horizon 2020 project creating a new generation of advanced and well-evaluated high-resolution global climate models, for the benefit of governments, business and society in general. The project has been engaging with several sectors, including finance, transport, and energy, to understand the extent to which any improved process understanding arising from high-resolution global climate modelling can, in turn, help with using climate model output to address user needs.
In this talk we will outline our work for the finance and (re)insurance industries. Following consultation with members of the industry, we are using PRIMAVERA climate models to generate a European windstorm event set for use in catastrophe modelling and risk analysis. The event set is generated from five different climate models, each run at a selection of resolutions ranging from 18 to 140 km, covering the period 1950-2050 and giving approximately 1700 years of climate model data in total. High-resolution climate models tend to have reduced biases in storm track position (which is too zonal in low-resolution climate models) and windstorm intensity. We will compare the properties of the windstorm footprints and associated risk across the different models and resolutions, to assess whether the high-resolution models lead to improved estimation of European windstorm risk. We will also compare windstorm risk in present and future climates, to see if a consistent picture emerges between models. Finally, we will address the question of whether the event sets from each PRIMAVERA model can be combined into a multi-model event set ensemble covering thousands of years of windstorm data.
How to cite: Lockwood, J., Palin, E., Guentchev, G., and Roberts, M.: Using PRIMAVERA high-resolution global climate models for European windstorm risk assessment in present and future climates for the (re)insurance industry, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7676, https://doi.org/10.5194/egusphere-egu2020-7676, 2020.
EGU2020-18439 | Displays | ITS5.6/NH9.22
Flood recurrence under climate change: a probabilistic flood risk assessment of critical infrastructure in the Danube basin
Michel Wortmann and Kai Schröter
Consistent information on fluvial flood risks in large river basins is typically sparse. This is especially true for the Danube River basin, which covers up to 14 countries, creating a patchwork of flood risk information across a populous and flood-prone region. As climatic changes have been shown to increase future flooding, consistent basin-scale assessments are vital to the insurance industry as well as municipal and infrastructural planning. The Future Danube Model (FDM) was designed to fill this gap while complying with both insurance industry and climate science standards: a reasonably detailed model scale (based on a 25 m digital elevation model), stochastic sampling to create a large number of extreme events and flood event footprints (10,000 years), a thorough calibration and validation, and an ensemble of climate model output to drive the model under scenario conditions. The model is here used to assess the impact on critical infrastructure across the basin. Results indicate that a marked increase in flood risk has already occurred when comparing the current climate period (2006-2035) to the reference period (1970-1999). Further increases are projected under a moderate and a business-as-usual scenario for the next climate period (2020-2049) and the end of the century (2070-2099). In large parts of the basin, the historical 100-year flood level, often used as a critical protection level for infrastructure, is projected to be equalled or exceeded once every 10 to 50 years, while areas with a 100-year flood risk are projected to increase by 6-19%.
How to cite: Wortmann, M. and Schröter, K.: Flood recurrence under climate change: a probabilistic flood risk assessment of critical infrastructure in the Danube basin, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18439, https://doi.org/10.5194/egusphere-egu2020-18439, 2020.
EGU2020-8410 | Displays | ITS5.6/NH9.22
Current and future flood risk assessment in the Danube region
Kai Schröter, Michel Wortmann, Stefan Lüdtke, Ben Hayes, Martin Drews, and Heidi Kreibich
Severe hydro-meteorological hazards have been increasing during recent decades and, as a consequence of global change, more frequent and intense events are expected in the future. Climate-informed planning of adaptation actions needs both consistent and reliable information about future risks and associated uncertainties, and appropriate tools to support comprehensive risk assessment and management.
The Future Danube Model (FDM) is a multi-hazard and risk model suite for the Danube region which provides climate information related to perils such as heavy precipitation, heatwaves, floods and droughts under recent and future climate conditions. FDM has a modular structure with exchangeable components for climate input, hydrology, inundation, risk, adaptation and visualisation. FDM is implemented within the open-source OASIS Loss Modelling Framework, which defines a standard for estimating ground-up loss and financial damage of disaster events or event scenarios.
The Oasis LMF implementation of the FDM is showcased for current and future fluvial flood risk assessment in the Danube catchment. We generate stochastic inundation event sets for current and future climate in the Danube region using the output of several EURO-CORDEX models as climate input. One event set represents 10,000 years of daily climate data for a given climate model, period and representative concentration pathway. With this input, we conduct long-term continuous simulations of flood processes using a coupled semi-distributed hydrological model and a 1.5D hydraulic model for fluvial floods. Flood losses to residential buildings are estimated using a probabilistic multi-variable vulnerability model. The effects of adaptation actions are exemplified by scenarios of private precaution. Changes in risk are illustrated with exceedance probability curves for different event sets representing current and future climate, at the spatial aggregation levels of interest for adaptation planning.
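Exceedance probability curves of the kind described above are typically built by aggregating event losses to annual totals and ranking them. A minimal sketch, with entirely synthetic event losses standing in for the FDM/Oasis output (event counts, loss distribution and currency are assumptions for illustration):

```python
import numpy as np

# Synthetic stochastic event set: each event has a simulation year
# and a loss; 10,000 simulated years in total.
rng = np.random.default_rng(1)
n_years = 10_000
n_events = 30_000
event_year = rng.integers(0, n_years, n_events)
event_loss = rng.lognormal(mean=12.0, sigma=1.5, size=n_events)  # EUR

# Aggregate exceedance probability (AEP) curve: sum losses per year,
# sort descending, and assign empirical probabilities rank / (N + 1).
annual_loss = np.bincount(event_year, weights=event_loss, minlength=n_years)
sorted_loss = np.sort(annual_loss)[::-1]
exceed_prob = np.arange(1, n_years + 1) / (n_years + 1)

# Annual loss exceeded with 1% probability (the 1-in-100-year loss)
loss_100yr = sorted_loss[np.searchsorted(exceed_prob, 0.01)]
print(f"1-in-100-year annual loss: {loss_100yr:,.0f} EUR")
```

Recomputing the curve per climate model, period and concentration pathway, and per spatial aggregation unit, gives the comparison of current versus future risk the abstract describes.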
How to cite: Schröter, K., Wortmann, M., Lüdtke, S., Hayes, B., Drews, M., and Kreibich, H.: Current and future flood risk assessment in the Danube region, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8410, https://doi.org/10.5194/egusphere-egu2020-8410, 2020.
Severe hydro-meteorological hazards have been increasing during recent decades and, as a consequence of global change, more frequent and intense events are expected in the future. Climate informed planning of adaptation actions needs both consistent and reliable information about future risks and associated uncertainties, and appropriate tools to support comprehensive risk assessment and management.
The Future Danube Model (FDM) is a multi-hazard and risk model suite for the Danube region which provides climate information related to perils such as heavy precipitation, heatwaves, floods and droughts under recent and future climate conditions. FDM has a modular structure with exchangeable components for climate input, hydrology, inundation, risk, adaptation and visualisation. FDM is implemented within the open-source OASIS Loss Modelling Framework, which defines a standard for estimating ground-up loss and financial damage of disaster events or event scenarios.
The Oasis LMF implementation of the FDM is showcased for current and future fluvial flood risk assessment in the Danube catchment. We generate stochastic inundation event sets for current and future climate in the Danube region using the output of several EURO-CORDEX models as climate input. One event set represents 10,000 years of daily climate data for a given climate model, period and representative concentration pathway. With this input, we conduct long-term continuous simulations of flood processes using a coupled semi-distributed hydrological model and a 1.5D hydraulic model for fluvial floods. Flood losses to residential buildings are estimated using a probabilistic multi-variable vulnerability model. The effects of adaptation actions are exemplified by scenarios of private precaution. Changes in risk are illustrated with exceedance probability curves for different event sets representing current and future climate at the spatial aggregation levels of interest for adaptation planning.
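The exceedance probability curves mentioned above can be illustrated with a minimal sketch. This is not the FDM or Oasis LMF implementation; the losses are randomly generated and purely illustrative. Given one event set's simulated annual losses, the empirical curve follows from ranking:

```python
import numpy as np

# Hypothetical sketch: building an exceedance probability (EP) curve from a
# stochastic event set, assuming 10,000 simulated years of annual losses.
rng = np.random.default_rng(42)
n_years = 10_000
annual_losses = rng.lognormal(mean=12.0, sigma=1.5, size=n_years)  # EUR, illustrative

# Empirical annual exceedance probability: rank losses in descending order.
sorted_losses = np.sort(annual_losses)[::-1]
exceedance_prob = np.arange(1, n_years + 1) / (n_years + 1)

def loss_at_return_period(rp_years: float) -> float:
    """Interpolate the loss exceeded on average once every `rp_years` years."""
    return float(np.interp(1.0 / rp_years, exceedance_prob, sorted_losses))

print(f"100-year loss: {loss_at_return_period(100):,.0f}")
```

The same ranking can be repeated per spatial aggregation level and per climate scenario, so that current- and future-climate curves can be compared directly.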
How to cite: Schröter, K., Wortmann, M., Lüdtke, S., Hayes, B., Drews, M., and Kreibich, H.: Current and future flood risk assessment in the Danube region, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8410, https://doi.org/10.5194/egusphere-egu2020-8410, 2020.
EGU2020-19300 | Displays | ITS5.6/NH9.22
Compound risk of extreme pluvial and fluvial floods – Martin Drews, Kai Schröter, Michel Wortmann, and Morten Andreas Dahl Larsen
Extreme precipitation events often lead to flash floods, in particular in urban environments dominated by impervious surfaces. Likewise, excessive rainfall over an extended period or heavy snowmelt may lead to extreme river floods, which historically have caused the loss of many lives, extensive damage to human and natural systems, and the displacement of millions of people. Risk assessments generally consider the potential hazards from pluvial and fluvial floods as separate events. This can lead to a significant underestimation of the risks: the physical processes (e.g. precipitation) that drive these extreme events may interact and/or exhibit a spatial or temporal dependency, which could intensify the hazard or modify the associated vulnerability and/or exposure. This is the case in Budapest, for example, where the urban drainage system relies on gravity flows. When the river is about 3 m above its normal water level, rainwater cannot drain into the river without pumping, changing the operational conditions of the drainage system and potentially increasing the risk of urban flooding if this coincides with an extreme precipitation event.
Here, we analyse the coincidence of compound pluvial and fluvial flood events for both current and future climate, including the potential physical links between extreme precipitation events and larger-scale rainfall in the Danube catchment. For this analysis, we use the Future Danube Model (FDM), a full catastrophe model compliant with insurance industry standards. The model considers four members of the Euro-CORDEX regional climate model ensemble and their historical and future simulations. 30-year time slices (e.g. 2071-2100) are extracted from each simulation, first bias-corrected and then statistically inflated using the IMAGE weather generator to yield spatially distributed daily time series covering 10,000 model years with the same overall statistical properties as the underlying Euro-CORDEX model but with an enhanced representation of rare (precipitation) extremes across the entire Danube catchment. These time series feed into a detailed hydrological/hydrodynamic model for the river catchment, based on a combination of the SWIM eco-hydrological model and a modified version of the CaMa-Flood hydrodynamic model, from which we estimate discharge levels and fluvial flood risk at the location of the city of Budapest. For the pluvial flood modelling, we use a modified version of the approach described in Kaspersen et al. (2017), forced by the same four Euro-CORDEX models as the SWIM hydrological model, to infer recurrence periods and intensities of present and future heavy to extreme rainfall events.
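The core point of the coincidence analysis, that dependent hazard drivers co-occur far more often than independence would suggest, can be sketched with a toy calculation. This is not the FDM; the correlation value and thresholds are invented for illustration:

```python
import numpy as np

# Toy sketch of compound-event coincidence (not the FDM itself): compare how
# often pluvial and fluvial extremes co-occur when the two drivers are
# correlated, versus what independence would predict.
rng = np.random.default_rng(0)
n_days = 365 * 100
rho = 0.5  # assumed correlation between the two drivers (illustrative)
rain, flow = rng.multivariate_normal([0.0, 0.0],
                                     [[1.0, rho], [rho, 1.0]], size=n_days).T

# Flag the worst 1% of days for each driver.
rain_extreme = rain > np.quantile(rain, 0.99)
flow_extreme = flow > np.quantile(flow, 0.99)

p_joint = np.mean(rain_extreme & flow_extreme)
p_independent = rain_extreme.mean() * flow_extreme.mean()
print(f"observed joint probability:  {p_joint:.5f}")
print(f"expected under independence: {p_independent:.5f}")
```

In settings like this, treating the two hazards as independent can substantially underestimate the compound probability, which is the underestimation the abstract warns about.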
Considering the seasonality of pluvial and fluvial flood risk, we find a significantly enhanced risk of compound events during the summer period, and that for most periods the compound risk is exacerbated by climate change. Given that the urban drainage system in Budapest is already worn down and lacks the capacity to deal with major flash floods, this suggests that new, potentially nature-based approaches to stormwater management should be considered and that significant investments in updated urban drainage infrastructure are urgently needed.
How to cite: Drews, M., Schröter, K., Wortmann, M., and Larsen, M. A. D.: Compound risk of extreme pluvial and fluvial floods, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19300, https://doi.org/10.5194/egusphere-egu2020-19300, 2020.
EGU2020-19644 | Displays | ITS5.6/NH9.22
Impacts of Climate Change and Remote Natural Catastrophes on EU Flood Insurance Markets – Max Tesselaar, W. J. Wouter Botzen, and Jeroen C. J. H. Aerts
Flood insurance coverage can enhance the financial resilience of households to changing flood risk caused by climate change. However, due to increasing risk in many areas, premiums are likely to rise, which may cause insurance to become unaffordable for low-income households. This issue can become especially prominent in high-risk areas when premiums are risk-reflective. Consequently, increasing premiums can reduce the demand for insurance coverage when this is optional, as individuals often underestimate the flood risk they face. After a flood, uninsured households then have to rely on private savings or ex-post government disaster relief. This situation is suboptimal, as households may not save sufficiently to cover the damage, and government compensation can be uncertain. Using a modeling approach, we simulate unaffordability and uptake of various forms of flood insurance systems in EU countries. To do this, we build upon and advance the “Dynamic Integrated Flood Insurance” (DIFI) model, which integrates flood risk simulations with an insurance sector model and a consumer behavior model. We compute the results using various climatic and socio-economic scenarios in order to assess the impact of climate and socio-economic change on flood insurance in the EU. Furthermore, we assess the impact of remote natural disasters on flood insurance premiums in EU countries, which occurs through the global reinsurance market. More specifically, after large natural disasters or compound events occurring outside the EU, which are likely to occur more often due to climate change, reinsurance premiums can temporarily rise as a result of a global “hard” capital market for reinsurers. The higher cost of capital for reinsurers is then transferred to households in the EU through higher flood insurance premiums.
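The premium and affordability mechanics described above can be sketched in a few lines. This is a hypothetical illustration, not the DIFI model; the loading factor, the hard-market markup, and the 5% affordability threshold are all assumed values:

```python
# Hypothetical sketch of the premium/affordability logic (names and
# thresholds are illustrative, not those of the DIFI model).
def risk_based_premium(expected_annual_loss: float,
                       loading_factor: float = 1.5,
                       reinsurance_markup: float = 1.0) -> float:
    """Premium = expected loss x cost loading x reinsurance market condition.

    A `reinsurance_markup` > 1 mimics a 'hard' reinsurance market after
    large remote catastrophes."""
    return expected_annual_loss * loading_factor * reinsurance_markup

def is_unaffordable(premium: float, income: float,
                    threshold: float = 0.05) -> bool:
    """Rule of thumb: cover is unaffordable above ~5% of household income."""
    return premium > threshold * income

soft_market = risk_based_premium(800.0)
hard_market = risk_based_premium(800.0, reinsurance_markup=1.4)
print(soft_market, hard_market, is_unaffordable(hard_market, income=30_000))
```

The sketch shows how a reinsurance shock alone, with unchanged local risk, can push a household across the affordability threshold.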
We find that a rising average and a higher variance of flood risk towards the end of the century can increase flood insurance premiums and cause higher premium volatility resulting from global reinsurance market conditions. The rise in premiums increases the unaffordability of insurance coverage and results in declining demand for flood insurance. A proposed policy improvement is to introduce a public reinsurance system for flood risk, as governments can often provide cheaper reinsurance coverage and are less subject to volatility on capital markets. Besides this, we recommend a limited degree of premium cross-subsidization to limit the growth of premiums in high-risk areas, and insurance purchase requirements to increase the level of financial protection against flooding.
How to cite: Tesselaar, M., Botzen, W. J. W., and Aerts, J. C. J. H.: Impacts of Climate Change and Remote Natural Catastrophes on EU Flood Insurance Markets, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19644, https://doi.org/10.5194/egusphere-egu2020-19644, 2020.
EGU2020-19222 | Displays | ITS5.6/NH9.22
Correlation structure of economic losses due to floods across Europe – Stefano Zanardo, Rebecca Smith, Ludovico Nicotina, Anongnart Assteerawatt, and Arno Hilberts
Large-scale climatic patterns and river network topology have an important impact on the space-time structure of floods. For example, in a recent study we showed that the effect of the North Atlantic Oscillation (NAO) is visible in the structure of economic losses at the European scale. The analysis revealed that in Northern Europe the majority of historic winter floods occurred during a positive NAO state, whereas the majority of summer floods occurred during a negative state. Through the application of a state-of-the-art flood catastrophe model, we also observed a statistically significant relationship between economic flood losses and the NAO. In this study we advance the analysis further by exploring the correlation structure of flood losses in Europe during different seasons and for different NAO states. Flood loss correlation is measured in terms of the “loss synchrony scale” (LSC), a metric formalized for this study following the definition of the “flood synchrony scale” in Berghuijs et al. (2019). For an individual event and an individual CRESTA region, the LSC is defined as the maximum radius around the CRESTA region within which at least half of the other CRESTA regions experience a loss due to the same event. We analyse the LSC across Europe, as produced by the loss model, and check for consistency with the data-based flood synchrony scale in Berghuijs et al. (2019). We further explore how the LSC changes between seasons and between NAO states. This analysis can help improve financial preparedness for catastrophic floods, as a better understanding of the correlation structure of flood events allows for a better distribution of resources as well as a more efficient application of mitigation measures.
Berghuijs, W. R., Allen, S. T., Harrigan, S., and Kirchner, J. W.: Growing spatial scales of synchronous river flooding in Europe, Geophys. Res. Lett., 46, 1423–1428, 2019.
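The LSC definition above admits a direct computation. The sketch below is one reading of that definition on toy data (coordinates in km, a boolean loss flag per region); it interprets "at least half of the other regions" as half of those within the candidate radius, following the flood synchrony scale of Berghuijs et al. (2019), and is not the authors' implementation:

```python
import numpy as np

# Illustrative sketch of the loss synchrony scale (LSC) on toy data.
def loss_synchrony_scale(coords: np.ndarray, has_loss: np.ndarray,
                         i: int) -> float:
    """Largest radius around region i within which at least half of the
    other regions experienced a loss in the same event."""
    d = np.linalg.norm(coords - coords[i], axis=1)
    order = np.argsort(d)[1:]              # other regions, nearest first
    losses = has_loss[order]
    frac = np.cumsum(losses) / np.arange(1, len(order) + 1)
    radii = d[order]
    ok = frac >= 0.5                       # radii where the criterion holds
    return float(radii[ok].max()) if ok.any() else 0.0

coords = np.array([[0, 0], [1, 0], [2, 0], [10, 0], [11, 0]], dtype=float)
has_loss = np.array([True, True, False, False, False])
print(loss_synchrony_scale(coords, has_loss, 0))  # -> 2.0
```

Averaging this quantity over events and regions gives a per-season, per-NAO-state measure of how spatially synchronous losses are.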
How to cite: Zanardo, S., Smith, R., Nicotina, L., Assteerawatt, A., and Hilberts, A.: Correlation structure of economic losses due to floods across Europe, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19222, https://doi.org/10.5194/egusphere-egu2020-19222, 2020.
EGU2020-10037 | Displays | ITS5.6/NH9.22
Improving the robustness of flood catastrophe models in insurance through academia-industry collaboration – Valentina Noacco, Francesca Pianosi, Thorsten Wagener, Kirsty Styles, and Stephen Hutchings
To quantify risk from natural hazards and ensure a robust decision-making process in the insurance industry, uncertainties in the mathematical models that underpin decisions need to be efficiently and robustly captured. The complexity and sheer scale of the mathematical modelling often makes a comprehensive, transparent and easily communicable understanding of the uncertainties very difficult. Models predicting flood hazard and risk have shown high levels of uncertainty in their predictions due to data limitations and model structural uncertainty. Moreover, uncertainties are estimated to increase with climate change, especially for higher warming levels.
Global Sensitivity Analysis (GSA) provides a structured approach to quantify and compare the relative importance of parameter, data and structural uncertainty. GSA has been implemented successfully in tools such as the Sensitivity Analysis For Everybody (SAFE) toolbox, which is currently used by more than 2000 researchers worldwide. However, tailored tools, workflows and case studies are needed to demonstrate GSA benefits to practitioners and accelerate its uptake by the insurance industry.
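The idea behind GSA can be conveyed with a minimal sketch: sample the uncertain inputs, run the model, and rank inputs by how strongly they drive the output. This is a crude illustration, not the SAFE toolbox; the toy "loss model" and its parameter ranges are invented, and squared correlation is used as a rough stand-in for a proper variance-based sensitivity index:

```python
import numpy as np

# Minimal GSA-style sketch on a toy loss model (not the SAFE toolbox).
rng = np.random.default_rng(1)
n = 5_000
depth_error = rng.uniform(-0.5, 0.5, n)     # hazard data uncertainty (m)
vuln_slope = rng.uniform(0.1, 0.3, n)       # vulnerability curve parameter
exposure_scale = rng.uniform(0.9, 1.1, n)   # exposure data uncertainty

# Toy loss model: exposure x vulnerability x (flood depth of 2 m + error).
loss = exposure_scale * vuln_slope * np.maximum(0.0, 2.0 + depth_error)

# Crude sensitivity measure: squared correlation with the output.
inputs = {"depth_error": depth_error, "vuln_slope": vuln_slope,
          "exposure_scale": exposure_scale}
sensitivity = {k: np.corrcoef(v, loss)[0, 1] ** 2 for k, v in inputs.items()}
for name, s in sorted(sensitivity.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {s:.2f}")
```

Even this crude ranking shows the practical payoff: it tells a model user which uncertain dataset or assumption most deserves scrutiny before losses are reported.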
One such case study has been the collaboration between the University of Bristol and JBA Risk Management on JBA’s new Global Flood Model, whose technology and flexibility have allowed a catastrophe model to be tested in ways not possible in the past. JBA has gained great insight into the sensitivity of modelled losses to uncertainties in the model datasets and analysis options. This has helped to explore the key sensitivities of the results to the assumptions made, for example to visualise how the distribution of modelled losses varies by return period and to explore which parameters have the biggest impact on loss for the part of the exceedance probability curve of interest. This information is essential for insurance companies to form their view of risk and to empower model users to adequately communicate uncertainties to decision-makers.
How to cite: Noacco, V., Pianosi, F., Wagener, T., Styles, K., and Hutchings, S.: Improving the robustness of flood catastrophe models in insurance through academia-industry collaboration, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10037, https://doi.org/10.5194/egusphere-egu2020-10037, 2020.
EGU2020-9877 | Displays | ITS5.6/NH9.22
Climate risk score – a framework to quantify an insurance portfolio's exposure and contribution to climate change – Samuel Lüthi, Michael Gloor, and Michael Walz
The (re)insurance industry is alarmed that trends resulting from changing climate extremes may not be correctly reflected in its models, which are typically calibrated on past data. However, depending on the region and the peril, these trends vary in direction, magnitude and confidence level. A climate risk score framework has been developed that makes it possible to identify regions or insurance portfolios which are particularly exposed to the consequences of climatic changes. In addition, the score also highlights a portfolio's contribution to climate change, which eventually translates into a transition risk – the risk emerging from the transition to a low-carbon economy.
The climate risk score is based on several sub-scores which reflect expected changes in mean and extreme precipitation and temperature as well as in mean sea level rise. It is computed using output from several CMIP5 models – the models that form the data basis of the recent IPCC reports. In addition, Swiss Re's proprietary storm surge zones as well as its pluvial and fluvial flood zones are incorporated, allowing for a high-resolution (30 m) risk view. The contribution to climate change is displayed qualitatively, based on the occupancy of the individual sites of the portfolio.
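One plausible way to aggregate such sub-scores into a portfolio-level figure is an exposure-weighted average. The sketch below is a hypothetical illustration; the peril names, weights, 0-100 score ranges, and insured values are all invented and are not Swiss Re's methodology:

```python
# Hypothetical sketch of aggregating per-peril sub-scores into a
# portfolio-level climate risk score (weights and scales are illustrative).
def site_risk_score(sub_scores: dict[str, float],
                    weights: dict[str, float]) -> float:
    """Weighted average of per-peril sub-scores, each on a 0-100 scale."""
    total_w = sum(weights.values())
    return sum(weights[k] * sub_scores[k] for k in sub_scores) / total_w

def portfolio_score(sites: list[dict], weights: dict[str, float]) -> float:
    """Exposure-weighted mean of site scores across a portfolio."""
    num = sum(s["insured_value"] * site_risk_score(s["sub_scores"], weights)
              for s in sites)
    return num / sum(s["insured_value"] for s in sites)

weights = {"extreme_precip": 0.3, "heat": 0.2, "sea_level_rise": 0.5}
sites = [
    {"insured_value": 2e6, "sub_scores":
        {"extreme_precip": 60, "heat": 40, "sea_level_rise": 80}},
    {"insured_value": 1e6, "sub_scores":
        {"extreme_precip": 30, "heat": 50, "sea_level_rise": 10}},
]
print(round(portfolio_score(sites, weights), 1))  # -> 52.0
```

Recomputing the same aggregate under different RCP scenarios, or against a market-wide portfolio, gives the comparisons described in the following paragraph.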
Using this framework, the climate risk exposure of individual insurance portfolios can be assessed over time, across different RCP scenarios, or against an overall market portfolio. These insights can amongst others be used to steer a portfolio, or to judge past and expected changes in portfolio profitability and may thus also influence underwriting decisions. This may be particularly relevant for portfolios that are exposed to so-called secondary perils, i.e. high-frequency loss events of low-to-medium severity. Furthermore, regions can be identified, where uncertainties are particularly high and a more in-depth analysis of existing models might be required.
How to cite: Lüthi, S., Gloor, M., and Walz, M.: Climate risk score – a framework to quantify an insurance portfolio's exposure and contribution to climate change, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9877, https://doi.org/10.5194/egusphere-egu2020-9877, 2020.
EGU2020-5323 | Displays | ITS5.6/NH9.22
Assessing the global risk of climate change to re/insurers using catastrophe models and hazard maps – Sarah Jones, Emma Raven, and Jane Toothill
In 2018, worldwide natural catastrophe losses were estimated at around USD 155 billion, resulting in the fourth-highest insurance payout on sigma records, and in 2020 JBA Risk Management (JBA) estimates 2 billion people will be at risk of inland flooding. By 2100, under a 1.5°C warming scenario, the cost of coastal flooding alone as a result of sea level rise could reach USD 10.2 trillion per year, assuming no further adaptation. It is therefore imperative to understand the impact climate change may have on global flood risk and insured losses in the future.
The re/insurance industry has an important role to play in providing financial resilience in a changing climate. Although integrating climate science into financial business remains in its infancy, modelling companies like JBA are increasingly developing new data and services to help assess the potential impact of climate change on insurance exposure.
We will discuss several approaches to incorporating climate change projections with flood risk data using examples from research collaborations and commercial projects. Our case studies will include: (1) building a national-scale climate change flood model through the application of projected changes in river flow, rainfall and sea level to the stochastic event set in the model, and (2) using Global Climate Model data to adjust hydrological inputs driving 2D hydraulic models to develop climate change flood hazard maps.
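Approach (1), perturbing a model's event set with projected changes, can be sketched with a flow-frequency curve. All numbers below are invented for illustration; the change factor stands in for a projection-derived adjustment and is not from any specific study:

```python
import numpy as np

# Illustrative sketch of applying a projected change in river flow to a
# flow-frequency curve, then reading off how the return period of a fixed
# design flow shifts (all values invented).
return_periods = np.array([2, 5, 10, 25, 50, 100, 200], dtype=float)
baseline_flow = np.array([300, 420, 500, 600, 680, 760, 840], dtype=float)  # m3/s

change_factor = 0.12                       # assumed +12% flows by 2100
future_flow = baseline_flow * (1 + change_factor)

def return_period_of(flow: float, rps: np.ndarray,
                     flows: np.ndarray) -> float:
    """Interpolate the return period of a given flow (log space for RPs)."""
    return float(np.exp(np.interp(flow, flows, np.log(rps))))

design_flow = 760.0  # the baseline 100-year flow
print(f"baseline RP: {return_period_of(design_flow, return_periods, baseline_flow):.0f} yr")
print(f"future RP:   {return_period_of(design_flow, return_periods, future_flow):.0f} yr")
```

The sketch makes the mechanism concrete: a modest uplift in flows shortens the return period of today's 100-year event, which is the kind of shift the stochastic event set in approach (1) encodes.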
These tools provide outputs to meet different needs, and results may sometimes invoke further questions. For example: how can an extreme climate scenario produce lower flood risk than a conservative one? Why may adjacent postcodes' flood risk differ? We will explore the challenges associated with interpreting these results and the potential implications for the re/insurance industry.
How to cite: Jones, S., Raven, E., and Toothill, J.: Assessing the global risk of climate change to re/insurers using catastrophe models and hazard maps, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5323, https://doi.org/10.5194/egusphere-egu2020-5323, 2020.
EGU2020-3058 | Displays | ITS5.6/NH9.22 | Highlight
Integrating Climate and Socioeconomic Pathways to Calculate the Future Cost of CatastrophesAlastair Clarke, Alexander Koch, Eric Robinson, Michelle Cipullo, Shane Latchman, and Peter Sousounis
The cost of future catastrophes will depend on changes to the hazard, exposure and vulnerability. Previous work has shown how climate change could affect the financial losses from damaged buildings by altering the frequency, severity and other characteristics of the hazard, but has not shown how socioeconomic trends could affect losses by altering the total number, spatial distribution and vulnerability of buildings.
We extend and apply urban scaling theory to model the spatiotemporal evolution of exposure using population projections that are consistent with Shared Socioeconomic Pathways (SSPs). The exposure sets are integrated with hazard catalogues that are consistent with Representative Concentration Pathways to give five views of UK windstorm risk for the year 2100.
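The core of urban scaling theory is the power law Y = Y0 · N^β, where exposure Y grows superlinearly with city population N when β > 1. The following minimal sketch illustrates the projection step described above; the exponent, population figures, and function name are illustrative assumptions, not AIR's calibrated model.

```python
# Minimal sketch of urban scaling applied to exposure projection.
# Assumptions (not from the abstract): beta = 1.1 and the population
# figures are illustrative; AIR's actual exposure model is proprietary.

def scale_exposure(exposure_now: float, pop_now: float, pop_future: float,
                   beta: float = 1.1) -> float:
    """Project exposure with the urban scaling law Y ~ N**beta:
    exposure grows superlinearly (beta > 1) with city population."""
    return exposure_now * (pop_future / pop_now) ** beta

# Example: a city doubling its population under an SSP projection.
future = scale_exposure(exposure_now=100.0, pop_now=1e6, pop_future=2e6)
print(round(future, 1))  # superlinear: more than double the exposure
```

Running the same calculation per city with SSP-consistent population projections yields an ensemble of plausible exposure sets for 2100.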
SSPs describe five plausible futures where socioeconomic trends have made mitigation of, or adaptation to, climate change harder or easier. For example, one SSP describes a global panacea of co-operative, sustainable development while another describes a fragmented, under-developed world heavily reliant on fossil fuels. AIR’s present-day exposure set, representative of all insurable properties in the UK, is perturbed by the SSPs to create an ensemble of plausible exposure sets for the year 2100. This ensemble is run through the AIR Extratropical Cyclone model for Europe with four stochastic event-based catalogues that represent the present hazard and three plausible future hazards posed by 1.5°C, 3°C and 4.5°C increases in global temperature.
Previous work found that global warming of 1.5°C to 4.5°C would increase the Average Annual Loss (AAL) from UK windstorms by 11% to 25%. We find that changes in exposure alone, dictated by the SSPs, lead to a wider range of changes in AAL. Urbanisation occurs under all SSPs resulting in exposure concentrating in cities and regional-level variation in AAL. Changes in AAL will further widen when integrated with the future hazard catalogues.
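The AAL metric used above is simply the total loss across a stochastic event catalogue divided by the number of simulated years. A hedged sketch, with an invented ten-year toy catalogue rather than AIR model output:

```python
# Hedged sketch: Average Annual Loss (AAL) from a stochastic event
# catalogue. The event losses and the short 10-year catalogue length
# are illustrative, not AIR model output.

def average_annual_loss(event_losses, n_years):
    """AAL = total loss across all simulated events / simulated years."""
    return sum(event_losses) / n_years

losses_present = [5.0, 12.0, 3.5, 20.0]  # losses per event, arbitrary units
aal_present = average_annual_loss(losses_present, n_years=10)
aal_future = aal_present * 1.25          # e.g. the +25% found for 4.5 degC
print(aal_present, aal_future)
```

In practice the future AAL is recomputed from the future hazard catalogue and perturbed exposure sets rather than scaled by a flat factor.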
The results can help governments and public bodies to decide on a strategy for future urban and rural development, and how much to invest in protective measures against catastrophes. The framework can be extended to other perils in other countries adapting to climate change.
How to cite: Clarke, A., Koch, A., Robinson, E., Cipullo, M., Latchman, S., and Sousounis, P.: Integrating Climate and Socioeconomic Pathways to Calculate the Future Cost of Catastrophes, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-3058, https://doi.org/10.5194/egusphere-egu2020-3058, 2020.
EGU2020-21646 | Displays | ITS5.6/NH9.22
Hot spots - cold spots - what dots? A critical reflection on integrated climate risk assessments – example flood risk in AustriaStefan Kienberger and Jutta-Lucia Leis
Climate risk, and related impacts, are determined by a variety of natural, climatological and socio-economic factors. In its fifth Assessment Report, the Intergovernmental Panel on Climate Change has adapted the concept and terminology in this respect. The challenge is: How can relevant influencing factors be identified and integrated? And, how can these factors be represented spatially and integratively in order to provide decision makers with a sound basis for adaptation measures? The central starting question is: Where do I do what (and when)? Within the Austrian ACRP project 'RESPECT', a novel climate change risk analysis for the natural hazard 'flooding' was developed. Special attention is paid to the modelling of socio-economic and physical vulnerability and its integration into a spatially explicit climate risk analysis. As a result, spatial and thematic hotspots of social and physical vulnerability and climate risk for Austria are identified, which serve as a basis for the identification of adaptation measures.
As a result, climate risk maps are available for Austria, which show risk and vulnerability hotspots as homogeneous spatial regions, independent of administrative boundaries and traditional raster-based approaches. These hotspots are quantitatively evaluated by an index value as a measure of climate risk. In addition to the purely quantitative evaluation, it is also possible to characterise and present the spatial units qualitatively, in terms of 'problem areas' and contributing factors. This is a significant development compared to 'traditional' spatial units (grid cell based; based on administrative units). Thus the question posed at the beginning can be answered: where are which intervention measures necessary? The results are available for socio-economic and physical climate risk, which are flanked by corresponding hazard and vulnerability maps. Results for the present and the future have been produced using proxy indicators from the high-resolution Austrian climate change scenario data (ÖKS15). This makes it possible to identify future hot spots under the assumption of different climate scenarios. The presentation presents the adapted risk concept and methodological approach, and reflects critically on the opportunities and challenges of climate risk analysis in Austria and in general for the planning of climate change adaptation measures.
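In the IPCC AR5 framing used here, risk emerges from the interplay of hazard, exposure, and vulnerability. One common way to condense these into a single index per spatial unit is a weighted aggregation of normalized components; the geometric mean and equal weights below are illustrative assumptions, not the RESPECT project's actual method:

```python
# Illustrative composite climate-risk index in the IPCC AR5 spirit
# (risk as a function of hazard, exposure, vulnerability).
# Weights, the geometric-mean form, and the input values are assumptions.

def risk_index(hazard, exposure, vulnerability, weights=(1/3, 1/3, 1/3)):
    """Weighted geometric mean of normalized components in [0, 1]."""
    wh, we, wv = weights
    return (hazard ** wh) * (exposure ** we) * (vulnerability ** wv)

# Two hypothetical spatial units: one hazard-driven, one vulnerability-driven.
print(round(risk_index(0.9, 0.4, 0.3), 3))
print(round(risk_index(0.3, 0.5, 0.9), 3))
```

A geometric mean is a common design choice here because a unit with zero hazard or zero exposure then correctly receives zero risk, which an additive index would not guarantee.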
How to cite: Kienberger, S. and Leis, J.-L.: Hot spots - cold spots - what dots? A critical reflection on integrated climate risk assessments – example flood risk in Austria, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21646, https://doi.org/10.5194/egusphere-egu2020-21646, 2020.
EGU2020-12893 | Displays | ITS5.6/NH9.22
A decision-theoretic approach to sustain public protection under climate change based on ensembles of future hazard developmentsKatharina Enigl, Christoph Matulla, Fabian Frank, Matthias Schlögl, Franz Schmid, and Ingo Schnetzer
In large parts of the world, an increasing number of damaging events caused by previously rare extreme weather phenomena is being observed. This poses a challenge to those responsible for civil protection of how to sustain current safety levels under accelerated climate change. The aim of this study is to contribute to meeting these challenges by providing methods to determine anticipatory strategies for decades of sustainable protection.
This endeavor requires the identification of weather-related hazard processes on the one hand, and the establishment of corresponding future hazard development corridors on the other. The former, so-called Climate Indices (CIs), are determined by blending damage events and spatiotemporally highly resolved meteorological data for three different regions in the Austrian Alpine region and six different process categories via multivariate statistical analyses. The derivation of hazard development corridors describing future changes in risk landscapes requires ensembles of regional climate projections, in which the occurrence of corresponding CIs is detected.
Results are incorporated into the decision-making process and processed together with experts in civil protection. The determination of optimal, sustainable protection strategies is based on decision-theoretical techniques and the application of the expected utility theory (Bernoulli principle).
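Under the expected utility (Bernoulli) principle, the preferred strategy maximizes the probability-weighted utility of its outcomes across scenarios. A minimal sketch, where the scenario probabilities, residual-wealth figures, and the concave log utility are illustrative assumptions rather than the study's calibrated values:

```python
# Sketch of strategy selection by expected utility (Bernoulli principle):
# pick the protection strategy maximizing probability-weighted utility
# over climate scenarios. All numbers and the log utility are assumptions.
import math

def expected_utility(outcomes, probs, utility=math.log):
    return sum(p * utility(x) for x, p in zip(outcomes, probs))

# Residual wealth (arbitrary units) per strategy under two scenarios.
scenarios = {"retention": [90.0, 70.0], "linear_measures": [95.0, 40.0]}
probs = [0.6, 0.4]  # e.g. "climate-friendly" vs "business as usual" weights

best = max(scenarios, key=lambda s: expected_utility(scenarios[s], probs))
print(best)  # the risk-averse log utility penalizes the bad-case outcome
```

Note how the concave utility makes the decision risk-averse: the strategy with the better best case but worse worst case loses, consistent with ranking robust retention measures ahead of linear protection.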
The feasibility of integrating hazard development corridors into decision-making processes, as well as the satisfactory implementation of established procedures, is demonstrated by the most comprehensive civil protection project in Austria to date. The results are consistent and show significant differences between near (2036-2065) and far future (2071-2100) time periods, as well as between the threat levels corresponding to the "climate-friendly" path of humanity and those associated with the "business as usual" scenario. The results are in line with the European Floods Directive by ranking linear measures behind resettlement and retention measures.
How to cite: Enigl, K., Matulla, C., Frank, F., Schlögl, M., Schmid, F., and Schnetzer, I.: A decision-theoretic approach to sustain public protection under climate change based on ensembles of future hazard developments, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12893, https://doi.org/10.5194/egusphere-egu2020-12893, 2020.
EGU2020-4276 | Displays | ITS5.6/NH9.22
The application of climate adaptation algorithm to physical climate risk assessment and management of the TCFDIwen Liu and Ching-pin Tung
The Financial Stability Board (FSB) published the “Recommendations of the Task Force on Climate-related Financial Disclosures (TCFD)” in 2017 to assist companies in assessing climate-related risks and opportunities and in financial disclosure. However, integrating climate scenarios into corporate risk management systems and financially quantifying climate-related risks remain challenges for corporate practice. To collect the climate scenarios mentioned in the TCFD and integrate the relevant factors in corporate operations, this study uses the TCFD framework (Governance, Strategy, Risk Management, Metrics and Targets), introduces the first three steps of Climate Change Adaptation (CCA Steps): "identifying problems and establishing objectives", "assessing and analyzing current risk", and "assessing and analyzing future risk", and uses a climate risk template with Hazard, Exposure, and Vulnerability as risk assessment factors to establish a framework for the evaluation and analysis of risk. After establishing a complete method for climate risk and opportunity assessment, and in response to "financial disclosure", the study links to financial statement items, referring to related concepts such as “Value at Risk” and “stranded assets”, to strengthen the integrity and transparency of corporate financial disclosure. Finally, the study selects a specific physical climate risk in an industry for a case study, based on analysis of the literature, international reports, and historical events, and introduces a climate risk assessment framework to verify its practicality. The study's results can be applied to the risk management of business operations. At the same time, the climate risk framework can assist companies in factoring climate change into their decisions, maintaining sustainable competitiveness in a low-carbon economy.
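Value at Risk, mentioned above as a disclosure concept, is the loss level not exceeded with a chosen probability. A minimal empirical sketch; the loss sample, units, and the 90% confidence level are illustrative assumptions:

```python
# Minimal sketch of empirical Value at Risk (VaR) for climate-related
# loss disclosure. The loss sample and the 90% level are assumptions.
import math

def value_at_risk(losses, alpha=0.90):
    """Empirical VaR: the smallest loss such that at least alpha of the
    sample lies at or below it."""
    s = sorted(losses)
    idx = math.ceil(alpha * len(s)) - 1
    return s[idx]

annual_losses = [0, 1, 2, 3, 4, 5, 8, 12, 30, 60]  # simulated losses, e.g. MUSD
print(value_at_risk(annual_losses))  # 30
```

In a TCFD context such a figure would be derived from scenario-conditioned loss simulations (hazard x exposure x vulnerability) rather than a raw historical sample.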
Key words: climate risk assessment, TCFD, enterprise risk management
How to cite: Liu, I. and Tung, C.: The application of climate adaptation algorithm to physical climate risk assessment and management of the TCFD, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4276, https://doi.org/10.5194/egusphere-egu2020-4276, 2020.
EGU2020-2255 | Displays | ITS5.6/NH9.22
Analyzing the Shifts of Land Use for Agricultural Land Planning under Climate Change– A Case Study of Northern Yilan County, TaiwanWen - Yen Lin and Chi-Tung Hung
Taiwan lies on the edges of the sub-tropical and tropical climate zones and has been identified as a high-risk area by international climate change research. According to the Intergovernmental Panel on Climate Change (IPCC), Taiwan is threatened by global warming, changing rainfall patterns, sea level rise, and the increasing frequency and influence of extreme weather, which will greatly impact the agriculture industry and future food security. Unfortunately, along with Taiwan's rapid economic development and urbanization since the 1960s, agricultural land use has become less competitive with industrial, commercial, and residential land uses. Therefore, to effectively enhance resilience and conserve agricultural lands under the threats of climate change and competition from other land uses, Taiwan's Spatial Planning Act (promulgated on 2016/1/6) designates Agricultural Development Zones, one of four major functional zones in the National Spatial Plan, as demarcated functional zones subject to land use control. Every city and county is expected to complete its zoning plan by 2022, and one of the major issues is accounting for the changing land use functions of different locations. By comparing the 2007 and 2016 land utilization maps produced by the National Land Surveying and Mapping Center (Taiwan), this study identifies the 10-year changes in agricultural lands of northern Yilan County. To further investigate the spatial distribution of agricultural land changes, spatial analysis techniques such as multi-distance spatial cluster analysis (Ripley's K function) and point pattern analysis (kernel density) are employed to analyze the spatial clustering of changes.
The spatial analysis results are overlaid with climate-change-related hazard risk maps, such as flooding, landslide, and soil liquefaction, to support decision making for future agricultural land planning and the agricultural development zoning plan.
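Ripley's K compares the observed density of point pairs within distance r against the expectation under complete spatial randomness (K(r) = πr²). A hedged sketch with synthetic coordinates; edge correction is omitted for brevity, and the study's actual GIS workflow is richer:

```python
# Hedged sketch of Ripley's K for detecting clustering of agricultural
# land-change points; coordinates are synthetic and edge correction is
# omitted for brevity.
import math

def ripleys_k(points, r, area):
    """K(r) = area / n^2 * number of ordered pairs within distance r.
    Under complete spatial randomness, K(r) ~ pi * r**2."""
    n = len(points)
    pairs = sum(
        1
        for i, (xi, yi) in enumerate(points)
        for j, (xj, yj) in enumerate(points)
        if i != j and math.hypot(xi - xj, yi - yj) <= r
    )
    return area * pairs / (n * n)

# A tight cluster of changed parcels plus one outlier in a 10 x 10 region.
pts = [(1, 1), (1.2, 1.1), (0.9, 1.3), (1.1, 0.8), (9, 9)]
k = ripleys_k(pts, r=1.0, area=100.0)
print(k > math.pi * 1.0 ** 2)  # clustered: K(r) exceeds the CSR expectation
```

Evaluating K over a range of r values, as in the multi-distance analysis above, shows at which spatial scales the land-use changes cluster.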
Keywords: agricultural land, land use changes, climate change, spatial analysis
How to cite: Lin, W.-Y. and Hung, C.-T.: Analyzing the Shifts of Land Use for Agricultural Land Planning under Climate Change– A Case Study of Northern Yilan County, Taiwan, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2255, https://doi.org/10.5194/egusphere-egu2020-2255, 2020.
EGU2020-9157 | Displays | ITS5.6/NH9.22
A downward counter-factual climate risk analysis of the impact of tropical cyclones in the Caribbean islandsAlessio Ciullo, Olivia Romppainen-Martius, Eric Strobl, and David Bresch
Climate risk analysis and assessment studies are typically conducted using historical data. These data, however, represent just one realization of the past, which could have unfolded differently. For example, Hurricane Irma might have struck South Florida at Category 4 and, had it done so, damages could have been as high as 150 billion, about three times the damage estimated for the actual event. To explore the impacts of such potentially catastrophic near-misses, downward counterfactual risk analysis (Woo, Maynard and Seria, 2017) complements standard risk analysis by exploring alternative, plausible realizations of past climatic events. As downward counterfactual risk analysis frames risk in an event-oriented manner, corresponding more closely to how people perceive risk, it is expected to increase climate risk awareness among the public and policy makers (Shepherd et al., 2018).
We present a counterfactual risk analysis study of climate risk from tropical cyclones on the Caribbean islands. The analysis is conducted using the natcat impact model CLIMADA (Aznar-Siguan and Bresch, 2019). Impact is estimated from forecasts of past tropical cyclone tracks from the THORPEX Interactive Grand Global Ensemble (TIGGE) dataset, as they all represent plausible alternative realizations of past tropical cyclones. The goal is to study whether, and to what extent, the impacts estimated from forecasts provide insights beyond those provided by historical records, e.g. in terms of cumulated annual damages and maximum annual damages, and, in so doing, to perform a worst-case analysis to support climate risk management planning.
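The downward-counterfactual idea reduces to a comparison: for each historical event, how many of its plausible alternative realizations (here, forecast-ensemble tracks) would have been costlier than what actually happened? A minimal sketch with invented damage figures; CLIMADA's actual API is not used here:

```python
# Sketch of the downward-counterfactual comparison: damages from
# forecast-ensemble realizations of an event vs. the historical damage.
# All figures are invented for illustration.

def downward_counterfactuals(historical_damage, ensemble_damages):
    """Return the ensemble members worse than what actually happened."""
    return [d for d in ensemble_damages if d > historical_damage]

# Hypothetical damages (arbitrary units) for one cyclone's ensemble members.
observed = 50.0
ensemble = [12.0, 48.0, 55.0, 150.0, 30.0, 90.0]
worse = downward_counterfactuals(observed, ensemble)
print(len(worse), max(worse))  # near-misses that would have been costlier
```

Aggregating such comparisons over all events and years yields the counterfactual distributions of cumulated and maximum annual damages discussed above.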
Aznar-Siguan, G. and Bresch, D. N.: CLIMADA v1: a global weather and climate risk assessment platform, Geosci. Model Dev., 12, 3085-3097, doi.org/10.5194/gmd-12-3085-2019, 2019.
Woo, G., Maynard, T., and Seria, J. Reimagining history. Counterfactual risk analysis. Retrieved from: https://www.lloyds.com/~/media/files/news-and-insight/risk-insight/2017/reimagining-history.pdf, 2017.
Shepherd, T.G., Boyd, E., Calel, R.A. et al.: Storylines: an alternative approach to representing uncertainty in physical aspects of climate change. Climatic Change 151, 555–571, doi.org/10.1007/s10584-018-2317-9 , 2018.
How to cite: Ciullo, A., Romppainen-Martius, O., Strobl, E., and Bresch, D.: A downward counter-factual climate risk analysis of the impact of tropical cyclones in the Caribbean islands, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9157, https://doi.org/10.5194/egusphere-egu2020-9157, 2020.
EGU2020-17552 | Displays | ITS5.6/NH9.22
Objective deviation of Climate Indices for the assessment of altered risk-landscapes driven by accelerated climate changeNikta Madjdi, Katharina Enigl, and Christoph Matulla
Floods are among the most devastating damage processes worldwide. Along with the increase in climate-change-induced extreme events, research devoted to the identification of so-called Climate Indices (CIs), describing the weather phenomena that trigger hazard occurrences, is gaining emphasis. CIs have wide potential for further investigation in both research and application, e.g. in public protection and the transport and logistics industry. The appearance of specific CIs in regional climate models (i.e., ‘hazard development corridors’) can serve as input to decision-theoretic concepts aiming to sustain current safety levels in risk landscapes altered by climate change (Matulla et al., submitted). Enigl et al. (2019) first objectively derived hazard-triggering precipitation totals for six process categories and three climatologically as well as geomorphologically distinct regions in the Austrian part of the European Alps. This study investigates a slightly different methodological approach for the objective, catchment-area-dependent determination of Climate Indices in the Austrian part of the Danube catchment.
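One simple way to derive a hazard-triggering threshold "objectively" is to pick the precipitation total that best separates damage days from non-damage days under a verification score such as the critical success index (CSI). The sketch below uses synthetic data and a univariate score; the study's multivariate approach is more involved:

```python
# Illustrative sketch of objective Climate Index derivation: choose the
# precipitation threshold maximizing the critical success index (CSI)
# between threshold exceedance and observed damage. Data are synthetic.

def best_threshold(precip, damage, candidates):
    def csi(t):
        hits = sum(p >= t and d for p, d in zip(precip, damage))
        misses = sum(p < t and d for p, d in zip(precip, damage))
        false_alarms = sum(p >= t and not d for p, d in zip(precip, damage))
        denom = hits + misses + false_alarms
        return hits / denom if denom else 0.0
    return max(candidates, key=csi)

precip = [5, 40, 80, 10, 95, 60, 20, 85]  # daily totals, mm (synthetic)
damage = [0, 0, 1, 0, 1, 0, 0, 1]         # 1 = damage event observed
t = best_threshold(precip, damage, candidates=[30, 50, 70])
print(t)
```

Counting how often regional climate projections exceed such a derived threshold is then one way to trace a hazard development corridor into the future.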
How to cite: Madjdi, N., Enigl, K., and Matulla, C.: Objective deviation of Climate Indices for the assessment of altered risk-landscapes driven by accelerated climate change, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17552, https://doi.org/10.5194/egusphere-egu2020-17552, 2020.
EGU2020-16561 | Displays | ITS5.6/NH9.22
Quantification of flood hazard for the megacity of Lagos, Nigeria, by hydrodynamic simulation
Tobias Pilz
Climate change leads to rising temperatures and thereby intensifies the water cycle. As a consequence, extreme rainfall events and the associated flooding are projected to increase in frequency and severity in many regions of the world. Especially in developing countries with high population growth and often unregulated settlement, flood risk may increase due to both greater flood hazard and enhanced exposure. One such example is the megacity of Lagos, Nigeria, one of the largest cities in Africa. Floods within the city are recurrent and caused by storm surges from the Atlantic, heavy precipitation, and river floods. Flood risk is already an issue and is expected to increase further due to intensified extreme precipitation, sea level rise and enhanced storm surges, as well as illegal settlement, poor management, insufficient or blocked drainage channels, missing early warning systems, and insufficient data.
The aim of this study is to deliver a first quantification of flood hazard for the city of Lagos based on hydrodynamic simulation with the model TELEMAC-2D. A focus is put on the use of freely available data sources and the design of reproducible workflows in order to enable local decision-makers to individually apply and refine the established workflows. The biggest challenge is the generation of the model mesh as the basis for subsequent hydrodynamic modelling due to limited data availability and the size of the model domain (about 1000 km²).
How to cite: Pilz, T.: Quantification of flood hazard for the megacity of Lagos, Nigeria, by hydrodynamic simulation, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-16561, https://doi.org/10.5194/egusphere-egu2020-16561, 2020.
EGU2020-9253 | Displays | ITS5.6/NH9.22
Correlating surface water flood damages in three Indonesia cities
Matthew Farnham, Vivian Camacho-Suarez, Alistair Milne, John Hillier, Dapeng Yu, Louise Slater, Laura Whyte, and Avinoam Baruch
Despite a high growth rate of over 5%, insurance penetration in Indonesia is low, at roughly 2.77 percent, and the country is one of the least developed insurance markets among the ASEAN economies. A primary explanation for the low uptake of insurance is a limited understanding of the multitude of natural-hazard risks the Indonesian market faces, principally flooding. The purpose of this research is to assess the flood correlation between three of the major cities (Jakarta, Semarang, and Solo) on the island of Java. These densely populated financial centres of Indonesia are highly prone to rainfall extremes during the monsoon season (November–March), many of which cause flooding. Historical rainfall events were extracted from ECMWF's ERA5 hourly rainfall dataset (1979–2018). The top 10 events for each city were selected based on peak rainfall intensity. For the events selected in one city, rainfall records of the same period were extracted for the other two cities, resulting in 30 simulations per city. Using a 2D hydraulic modelling tool (FloodMap), surface water flood footprints were generated for these events. In the absence of depth-damage curves, the number of buildings flooded in each event was used as a proxy for building damage. Damage to buildings due to surface water flooding in Solo and Semarang was found to be most correlated, with a significant number of buildings flooded in both cities in 15 out of the 20 paired events. Solo and Jakarta show some correlation (7 out of 20), whilst flooding in Semarang and Jakarta is least correlated (4 out of 20). This study is an initial analysis relevant to catastrophe modelling in a relatively data-sparse environment, providing an approximation of the correlation of flooding between three Indonesian cities.
Further studies are required to develop pragmatic approaches, complementing catastrophe modelling, that integrate the spatial correlation between flood damages in cities.
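The event selection and pairing procedure described above can be sketched as follows. This is an illustrative reconstruction with synthetic hourly rainfall, not the authors' code; the 24-hour event window, the flood threshold and all variable names are assumptions.

```python
import numpy as np

def top_events(rain, n=10, window=24):
    """Start indices of the n non-overlapping windows containing the
    highest peak hourly intensities (a proxy for the event ranking)."""
    starts, used = [], np.zeros(rain.size, bool)
    for i in np.argsort(rain)[::-1]:        # hours ranked by intensity
        if not used[max(0, i - window):i + window].any():
            starts.append(max(0, i - window // 2))
            used[max(0, i - window):i + window] = True
        if len(starts) == n:
            break
    return starts

rng = np.random.default_rng(0)
rain_a = rng.gamma(0.3, 2.0, 24 * 365)      # synthetic hourly rainfall, city A (mm)
rain_b = rng.gamma(0.3, 2.0, 24 * 365)      # the same period observed in city B

# For each top event in city A, extract the same period in city B and count
# paired events in which both cities exceed an assumed damage-relevant total.
threshold = 15.0                            # mm per event window (assumption)
paired = 0
for s in top_events(rain_a):
    paired += int(rain_a[s:s + 24].sum() > threshold
                  and rain_b[s:s + 24].sum() > threshold)
```

In the study itself the damage proxy is the number of buildings flooded in the FloodMap simulations rather than a rainfall total; the count of paired exceedances (e.g. 15 out of 20 for Solo and Semarang) then measures the inter-city correlation.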
How to cite: Farnham, M., Camacho-Suarez, V., Milne, A., Hillier, J., Yu, D., Slater, L., Whyte, L., and Baruch, A.: Correlating surface water flood damages in three Indonesia cities, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9253, https://doi.org/10.5194/egusphere-egu2020-9253, 2020.
EGU2020-2978 | Displays | ITS5.6/NH9.22
Climate trends and tourist flows: first results of the case study in the Sila National Park (southern Italy) within the INDECIS Project
Roberto Coscarelli, Loredana Antronico, Anna Boqué Ciurana, Francesco De Pascale, Alba Font Barnet, Antonio Paolo Russo, and Òscar Saladié Borraz
The scientific community agrees that climate change is generating a series of direct and indirect impacts on the environment and on humans that can no longer be underestimated. Consequently, it becomes urgent and necessary to know how this phenomenon affects ecosystems, productive activities and human well-being in order to plan measures for mitigation and adaptation. One sector whose performance is closely related to climate trends is tourism. The influence that climate change can have on tourism creates the need to adopt appropriate strategies to guarantee the sustainability of tourist destinations.
In order to develop models and tools for the near-real-time acquisition of climate data and for the spatial interpolation, visualization and communication of climate monitoring to territorial stakeholders, the INDECIS project has assembled a partnership of experts in the climate sector from 12 European countries. The INDECIS Project intends to develop an integrated approach to produce a series of climate indicators aimed at the high-priority sectors of the Global Framework for Climate Services of the World Meteorological Organization (agriculture, risk reduction, energy, health, water), with the addition of tourism.
With regard to the tourism sector, the territory of the Sila National Park (Calabria, southern Italy) has been selected as a study area for the acquisition of sectoral data on tourism (in particular, attendance data and tourist arrivals) and for a workshop aimed at identifying and enhancing the climate services that should be provided to stakeholders of the tourist destination, based on their needs. The workshop was organized around three focus groups related to the following tourism activities: snow tourism, water and lake tourism, and earth tourism. Within the focus groups, the identified stakeholders (hotel groups, local associations, tourist agencies, parks, etc.) were able to highlight their needs in relation to the climate services that the INDECIS Project intends to offer.
The results show that, in the case of long-term forecasts of weather conditions either favourable or unfavourable for their activities, stakeholders consider it essential to build a synergy between institutional, economic and social networks in order to undertake joint action. In the case of a favourable forecast, these actions could consist of expanding the tourist offer, building new infrastructure, planning new investments, and carrying out promotional actions to attract further customers. In the case of unfavourable forecasts, the stakeholders proposed the development of a new tourist destination model, as an alternative to the existing one, with new activities adapted to the new climatic conditions.
In this context, the local community is the key component of the destination and the main stakeholder in tourism planning. Therefore, it is essential to pay attention to communities and to work at the level of local tourist destinations to encourage mitigation of and adaptation to climate change.
Acknowledgments:
The Project INDECIS is part of ERA4CS, an ERA-NET initiated by JPI Climate, and funded by FORMAS (SE), DLR (DE), BMWFW (AT), IFD (DK), MINECO (ES), ANR (FR) with co-funding by the European Union (Grant 690462).
How to cite: Coscarelli, R., Antronico, L., Boqué Ciurana, A., De Pascale, F., Font Barnet, A., Russo, A. P., and Saladié Borraz, Ò.: Climate trends and tourist flows: first results of the case study in the Sila National Park (southern Italy) within the INDECIS Project., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2978, https://doi.org/10.5194/egusphere-egu2020-2978, 2020.
EGU2020-8816 | Displays | ITS5.6/NH9.22
Insurance Fund as an Adaptation Measure for Increasing Water Security in Basins Under Change
Gabriela Gesualdo, Felipe Souza, and Eduardo Mendiondo
Extreme weather events are increasingly evident and widespread around the world due to climate change. These events are driven by rising temperatures and changes in precipitation patterns, which lead to changes in flood frequency, drought and water availability. To reduce the future impacts of natural disasters, it is crucial to understand the spatiotemporal variability of the social, economic and environmental vulnerabilities related to them. Developing countries in particular are more vulnerable to climate risks due to their greater economic dependence on climate-sensitive primary activities, and due to infrastructure, finance and other factors that undermine successful adaptation. In this context, adaptation plays the role of anticipating the adverse effects of climate change and taking appropriate measures to prevent or minimize the damage they may cause. An insurance fund is thus a valuable adaptation tool for reimbursing unexpected losses, preventing long-term impacts and encouraging risk mitigation. Although this approach is successful throughout the world and major organizations support insurance as an adaptation measure, the Brazilian insurance fund only provides support for rural landowners. We will therefore evaluate the implementation of an indexed multi-risk insurance fund integrated with water security parameters as an instrument for adaptation to climate change. We will use SWAT+, a hydro-sedimentological model, to assess current conditions and future scenarios (up to 2100) of water security indices considering two Intergovernmental Panel on Climate Change (IPCC) Representative Concentration Pathways (RCP 4.5 and RCP 8.5). We will then incorporate those parameters into the Hydrological Risk Transfer Model (MTRH). Our results will provide optimized premiums in current and future scenarios to support adaptation plans to climate change.
Furthermore, the study will contribute technical-scientific information on the possible effects of climate change on hydrometeorological variables and their spatiotemporal variability.
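A rough illustration of how an index-based premium responds to shifting hydrological risk is sketched below. The gamma loss model, the loading factor and the deductible are assumptions made for illustration only; they are not output of SWAT+ or MTRH.

```python
import numpy as np

def premium(annual_losses, loading=0.2, deductible=0.0):
    """Actuarially fair premium plus a proportional loading: the expected
    insured loss above the deductible, grossed up by the loading factor."""
    insured = np.clip(annual_losses - deductible, 0.0, None)
    return insured.mean() * (1.0 + loading)

rng = np.random.default_rng(2)
current = rng.gamma(2.0, 50.0, 5_000)  # synthetic annual losses, current climate
future = rng.gamma(2.0, 65.0, 5_000)   # heavier losses under an assumed RCP 8.5 shift

p_now, p_rcp85 = premium(current), premium(future)  # premium rises with climate risk
```

Optimising the premium in MTRH would additionally balance solvency and loading constraints against the simulated water security indices; this sketch only shows the direction of the change between current and future scenarios.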
How to cite: Gesualdo, G., Souza, F., and Mendiondo, E.: Insurance Fund as an Adaptation Measure for Increasing Water Security in Basins Under Change, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8816, https://doi.org/10.5194/egusphere-egu2020-8816, 2020.
EGU2020-7850 | Displays | ITS5.6/NH9.22
Augmenting Catastrophe Models to Quantify Financial Losses Under Prescribed Climate Scenarios
Shane Latchman, Alastair Clarke, Boyd Zapatka, Peter Sousounis, and Scott Stransky
In 2019, the Bank of England, through the Prudential Regulation Authority (PRA), became the first regulator to ask insurers how financial losses could change under prescribed climate scenarios. Insurers readily use catastrophe models, built on at least 40 years of past climate data, to quantify the likelihood and severity of financial losses. However, they cannot readily use these models to address the climate scenarios posed by the PRA.
We present four novel methods for using existing catastrophe models to answer what-if climate scenario questions. The methods make use of sampling algorithms, quantile mapping, and adjustments to model parameters to represent different climate scenarios.
Using AIR’s Hurricane model for the United States (US), Inland Flood model for Great Britain, and Coastal Flood model for Great Britain, we quantify the sensitivity of the Average Annual Loss (AAL) and the 100-year exceedance probability aggregate loss (100-year loss) to four environmental variables under three climate scenarios. The environmental variables include the (i) frequency and (ii) severity of major US landfalling hurricanes; (iii) the mean sea level along the coasts of the US and Great Britain; and (iv) the surface run-off from extreme precipitation events in Great Britain. Each of these variables is increased in turn by low, medium and high amounts as prescribed by the PRA.
We compare each variable and rank their influence on loss. We find that the AAL and 100-year loss are more sensitive to changes in the severity of major US hurricanes than changes in the frequency. We will show whether sea level rise has a greater influence on coastal flooding losses in the US or in Great Britain, and we show how sensitive inland flooding losses are to surface run-off.
The methods yield approximate results but are quicker and easier to implement than running General Circulation Models. The methods and results will interest those in insurance, the public sector and academia who are working to understand how society best adapts to climate change.
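For reference, the two reported loss metrics can be computed directly from a catastrophe model's year-loss table. The sketch below uses a synthetic Poisson-lognormal loss model with illustrative perturbation factors; these are assumptions, not the PRA-prescribed amounts or AIR model output.

```python
import numpy as np

def loss_metrics(annual_losses):
    """AAL is the mean annual loss; the 100-year aggregate loss is the
    annual loss level exceeded with probability 1/100 per year."""
    return annual_losses.mean(), np.quantile(annual_losses, 1 - 1 / 100)

rng = np.random.default_rng(1)
YEARS = 10_000

def simulate(freq, sev_scale):
    """Synthetic year-loss table: Poisson event counts, lognormal severities."""
    counts = rng.poisson(freq, YEARS)
    return np.array([sev_scale * rng.lognormal(0.0, 1.0, k).sum()
                     for k in counts])

base = loss_metrics(simulate(0.6, 1.0))
more_severe = loss_metrics(simulate(0.6, 1.3))     # +30% severity (illustrative)
more_frequent = loss_metrics(simulate(0.78, 1.0))  # +30% frequency (illustrative)
```

For a compound Poisson model the AAL responds identically to a +30% frequency or +30% severity shift, while the 100-year loss generally does not, which is why ranking the variables by their influence on both metrics is informative.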
How to cite: Latchman, S., Clarke, A., Zapatka, B., Sousounis, P., and Stransky, S.: Augmenting Catastrophe Models to Quantify Financial Losses Under Prescribed Climate Scenarios, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7850, https://doi.org/10.5194/egusphere-egu2020-7850, 2020.
EGU2020-20103 | Displays | ITS5.6/NH9.22
Climate services for large scale investments in infrastructure and climate resilience in the Danube Basin
Jelena Radonić, Maja Turk Sekulić, Dubravko Ćulibrk, Martin Drews, Mads L. Dømgaard, and Michel Wortmann
The results presented in this contribution demonstrate the value of climate services for the planned construction of a new Wastewater Treatment Plant (WWTP) in Novi Sad, Serbia. In this case, climate services added value to the decision-making process in terms of enhanced effectiveness, optimized technological opportunities and minimized risks, and by serving as a means of involving and better informing end-users and stakeholders. The specific goal of the research was to improve the climate change resilience of the WWTP, to facilitate better overall hygienic conditions in Novi Sad, and to safeguard the potable water resources and the quality of the environment in the areas located downstream and under the influence of the Danube River.
In order to achieve this, preliminary activities focused on analyzing the current climate and hydrological conditions, engaging the relevant data providers, stakeholders and policy makers, and evaluating which relevant local data would be useful for the study. The collected data were used to test and improve the Future Danube Multi-hazard, Multi-risk Model (FDM), a catastrophe model implemented in the OASIS Loss Modeling Framework (Oasis-LMF). The FDM is implemented for the entire Danube Basin. High-resolution components for pluvial flood risk were further implemented for the city of Novi Sad, Serbia, after successful testing in the Budapest region. Observations and model results were used in a climate change impact assessment with the purpose of identifying adaptation options, appraising these options and integrating an adaptation action plan into the Feasibility Study for the WWTP construction. The results of the pluvial flood model for Novi Sad clearly suggested that pluvial flood risk must be considered and that protective measures have to be part of the WWTP construction, both under current and future climate conditions. Moreover, novel estimates of drainage water intensities during heavy rains would inform the design of the simultaneously planned pumping station on the banks of the Danube. Combined, this clearly demonstrates the added value of the climate services and risk information delivered by the FDM beyond the insurance sector, as well as its potential to support adaptation decision-making with respect to infrastructural investments in Novi Sad.
How to cite: Radonić, J., Turk Sekulić, M., Ćulibrk, D., Drews, M., Dømgaard, M. L., and Wortmann, M.: Climate services for large scale investments in infrastructure and climate resilience in the Danube Basin, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-20103, https://doi.org/10.5194/egusphere-egu2020-20103, 2020.
EGU2020-19161 | Displays | ITS5.6/NH9.22
'Transport and Transport-Infrastructure' - key findings of a "User Needs" surveyChristoph Matulla, Katharina Enigl, Audrey Macnab, Philip Evans, Gavin Roser, Samuel Muchemi, Gerald Fleming, Walt Dabberdt, Sarah Grimes, and Pekka Leviäkangas
The aim of this contribution is to present the design and findings of a survey targeted at assessing the needs of stakeholders in the transportation domain with respect to climate-change-driven damages. This ‘User Needs’ survey is one of the major objectives of multifaceted collaborations investigating anticipatory asset protection strategies under accelerating climate change. The viability of these efforts is ensured by pairing the scientific community (CIT, University of Vienna, BOKU, TU Vienna) with notable stakeholders (F&L, WMO, BMNT).
The ‘User Needs’ survey was carried out in cooperation between the Climate Impact Team (CIT), the European Transport, Freight and Logistics Leaders Forum (F&L), and the World Meteorological Organization (WMO). Its aim is to identify services that stakeholders in the realm of transportation themselves consider significant and beneficial.
Therefore, the findings should be of vital importance for (i) setting up meaningful climate services; (ii) selecting sustainable protection measures that strengthen transportation system resilience in the face of future climate change; and (iii) compiling the chapter on 'Land Transport' in WMO’s new Service Delivery Guide, as they ensure the expediency of the services described.
The results presented encompass: (i) stakeholders' assessment of extreme events in terms of their damaging impacts on transport, freight and logistics; (ii) stakeholders' assessment of the vulnerability of assets in transport, freight and logistics; (iii) an illustration of the extent of the impacts that climate change (through shifts in extremes and associated threats) has had on transport, freight and logistics over the past decades; (iv) stakeholders' expectations regarding future developments under advancing climate change; and (v) an evaluation of the time horizons (short, medium and long term) at which stakeholders need services.
A summary completes this contribution.
How to cite: Matulla, C., Enigl, K., Macnab, A., Evans, P., Roser, G., Muchemi, S., Fleming, G., Dabberdt, W., Grimes, S., and Leviäkangas, P.: 'Transport and Transport-Infrastructure' - key findings of a "User Needs" survey, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19161, https://doi.org/10.5194/egusphere-egu2020-19161, 2020.
EGU2020-11639 | Displays | ITS5.6/NH9.22
Climate services for forest fire risk managementCéline Deandreis, Gwendoline Lacressonière, Marc Chiapero, Miguel Mendes, Humberto Diaz Fidalgo, Maxence Rageade, Christoph Menz, Phil Cottle, and Nicholas Gellie
Weather and its climatic evolution play the main role in generating the hazard profiles of forest fires. The increase in the magnitude and damage of recent forest fire seasons has heightened the insurance sector's concern about this peril. Owing to limited knowledge of this risk, insurance coverage of forest fire risk remains widely low. A first step forward is clearly needed to (1) propose simplified approaches showing how the risk links to its main weather drivers, and (2) re-incentivize the use of insurance by forest managers.
To meet this objective, ARIA Technologies and its partners have developed a geospatial web-based decision tool, RiskFP, to support both forest owners and forest insurance actors in managing the vulnerability of their assets/portfolios to fire risk. RiskFP includes:
- A “realistic disaster scenario generator” module that generates hundreds of extreme wildfire scenarios to complement information from historical fire databases. This information can be used in damage and loss modelling to improve the estimation of the probable maximum loss (PML). In addition, the RiskFP “impact module” provides users with information on potential impacts, such as the amount of biomass burnt or the economic losses.
- A precise mapping of local forest fire risk through the graphical representation of an index with five risk levels (from low to extreme), providing an overview of the most critical locations with respect to the potential behavior of a fire in case of a hypothetical ignition.
- A forecasting/projection module informing users about the frequency of severe-to-extreme days over mid- and long-term horizons. It can be used by the forestry sector to better anticipate and prepare for the next fire season, and as a planning tool for long-term operations and investments.
At the heart of the platform lies the concept of critical landscape weather patterns (CLP), an empirical fire weather index that identifies severe-to-extreme weather days from the hourly records of a representative weather station (Gellie, 2019). It can be computed from past records, seasonal forecasts or climate projections, enabling fire risk assessment across these different time scales. The CLP module is coupled with a propagation model, the Wildfire Analyst® forest fire simulator, at a resolution of about 40 m, which is used to estimate the progression and behavior of a fire in space and time. It is based on the standardized and validated semi-empirical Rothermel propagation model (1972).
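The general idea of an empirical fire weather index derived from hourly station records can be sketched as follows. The actual CLP definition (Gellie, 2019) is not reproduced here; the scoring formula, thresholds, and five-level mapping below are illustrative assumptions only.

```python
# Hypothetical sketch of an hourly-record-based fire weather index.
# The scoring function and thresholds are illustrative, not the CLP.

def hourly_severity(temp_c, rel_hum, wind_kmh):
    """Toy hourly severity score: hotter, drier, windier -> higher."""
    return max(0.0, temp_c - 20.0) * (1.0 - rel_hum / 100.0) * (1.0 + wind_kmh / 30.0)

def classify_day(hourly_records):
    """Map the daily peak of the hourly scores to five risk levels."""
    peak = max(hourly_severity(t, h, w) for t, h, w in hourly_records)
    levels = [(5.0, "low"), (10.0, "moderate"), (20.0, "high"),
              (35.0, "very high"), (float("inf"), "extreme")]
    for threshold, label in levels:
        if peak < threshold:
            return label

# Example: hourly (temperature °C, relative humidity %, wind km/h)
# records for a hot, dry, windy afternoon.
day = [(32.0, 18.0, 40.0), (35.0, 12.0, 55.0), (30.0, 25.0, 30.0)]
print(classify_day(day))  # -> "extreme"
```

An operational index would additionally account for fuel dryness and antecedent conditions; this sketch only illustrates the station-record-to-risk-level pipeline.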
Acknowledgements:
We acknowledge the European Commission for sponsoring this work in the framework of the H2020-insurance project (Grant Agreement number 730381).
How to cite: Deandreis, C., Lacressonière, G., Chiapero, M., Mendes, M., Diaz Fidalgo, H., Rageade, M., Menz, C., Cottle, P., and Gellie, N.: Climate services for forest fire risk management, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11639, https://doi.org/10.5194/egusphere-egu2020-11639, 2020.
EGU2020-5605 | Displays | ITS5.6/NH9.22
Climate services to reduce human health impact associated with environmental risk factors exposure.Samya Pinheiro, Celine Deandreis, Gwendoline Lacressonniere, Larissa Zanutto, Christian Witt, Christina Hoffmann, Peter Hoffmann, Fred Hattermann, Ylva Hauf, Martin Drews, Mads Dømgaard, Per Skougaard Kaspersen, and Robin Hervé
Introduction - Climate change impacts can be reduced through exposure reduction and improved health care management. Adaptation strategies can be designed based on sustainable urban-infrastructure planning and warning systems (Banwell, 2018). The H2020 Insurance Project aimed to help the health insurance sector understand the relation between air quality, climate extremes and health conditions in a given population, quantifying potential losses associated with current and future climate risk, and potential climate services were identified. Considering the rising demand for adaptation solutions in a climate change context, we present two test cases from EU projects (H2020 Insurance - Germany, and CAMS/AIRE SALUD - Chile) to illustrate the potential of air quality and meteorological modeling for climate change adaptation.
Methods and Results -
H2020 Insurance – Health DEMO (https://h2020insurance.oasishub.co/): Most of the sector lacks detailed information on the baseline impact of air pollution or extreme weather events (e.g. heatwaves), as well as on projected losses under future climate. The H2020-Insurance Health Work Package showcased a risk/impact assessment based on high-resolution air quality and meteorological databases integrated with morbidity/mortality data, and provided present and future climate impacts on health.
District-specific climate-related relative risk for COPD hospital admissions was estimated for Berlin and Potsdam for the period 2012-2016. The attributable morbidity and the associated cost were calculated for present conditions. Climate change projections of air quality and heat exposure were computed and the potential future losses estimated. In parallel, a clinical trial demonstrated how specific counteracting measures (establishing ideal room temperatures, telemedicine to monitor the domestic environment, etc.) can help reduce hospital stays and shorten recovery times.
CAMS Project – AIRE SALUD (www.airesalud.cl): In contexts where risk awareness is established and strong, forecast systems are key resources to alert the population and issue recommendations to reduce exposure. The AIRE SALUD system is based on a geospatial analysis of medical consultations in public emergency services recorded between 2011 and 2018 by the Department of Health Statistics and Information (DEIS) of the Ministry of Health of Chile. It integrates demographic data, socioeconomic vulnerability factors, participatory web data flows, and atmospheric variables, and allowed the development of geostatistical/machine-learning algorithms that predict increases in respiratory infections in the Santiago Metropolitan Region one week in advance, with a confidence level above 87%.
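The core of such a one-week-ahead predictor is a statistical link between lagged atmospheric variables and subsequent consultation counts. A minimal sketch, assuming hypothetical weekly PM2.5 and consultation data and an ordinary-least-squares baseline (not the geostatistical/machine-learning system actually used in AIRE SALUD):

```python
# Illustrative one-week-ahead baseline: regress emergency respiratory
# consultations on PM2.5 lagged by one week. Data are invented.

def ols_fit(x, y):
    """Closed-form simple linear regression: y ~ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# Hypothetical weekly PM2.5 (µg/m³), lagged one week, and the
# consultations observed the following week.
pm25_lagged = [20, 35, 50, 65, 80]
consults = [120, 150, 185, 210, 240]

a, b = ols_fit(pm25_lagged, consults)
next_week = a + b * 90  # forecast for a week following PM2.5 of 90
print(round(next_week))  # -> 261
```

The operational system would use many more predictors (demographic, socioeconomic, participatory-web and atmospheric variables) and spatially explicit models, but the lag structure shown here is what makes a one-week lead time possible.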
Conclusion - The applications described present potential as a decision-making tool for adaptation plans in urban areas, improving population resilience and/or giving support on healthcare infrastructure planning strategy.
How to cite: Pinheiro, S., Deandreis, C., Lacressonniere, G., Zanutto, L., Witt, C., Hoffmann, C., Hoffmann, P., Hattermann, F., Hauf, Y., Drews, M., Dømgaard, M., Kaspersen, P. S., and Hervé, R.: Climate services to reduce human health impact associated with environmental risk factors exposure., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-5605, https://doi.org/10.5194/egusphere-egu2020-5605, 2020.
EGU2020-9373 * | Displays | ITS5.6/NH9.22 | Highlight
Economic loss due to flooding in Europe at 1.5°C global warmingMaximiliano Sassi, Carlotta Scudeler, Ludovico Nicotina, Anongnart Assteerawatt, and Arno Hilberts
We study the impact of climate change on European flood economic losses under a 1.5°C global warming scenario. Climate scenarios were generated with the Community Atmospheric Model (CAM) version 5 under the protocols of the Half a degree Additional warming, Prognosis and Projected Impacts (HAPPI) experiment. The present-climate scenario, corresponding to the years 2006-2015, includes observed forcing conditions for sea surface temperatures (SSTs) and sea-ice cover. The future 1.5°C scenario was constructed following SST warming according to the response to RCP2.6 in CMIP5 model simulations. Each scenario comprises five 10-year simulations that differ in the initial weather state. For each scenario we generated a 1000-year stochastic set of precipitation based on the main modes of variability of gridded precipitation data, obtained through Principal Component Analysis applied to the monthly precipitation fields of the combined 50 simulated years. The other variables were obtained through an analogue-month approach. Stochastic monthly fields were subsequently disaggregated in space and time to 3-hourly, 6 km resolution grids, which were finally fed to a well-calibrated flood-loss model. The flood-loss model comprises a rainfall-runoff component, a flood routing scheme, an inundation component, and a financial module that integrates flood hazard, building vulnerability, and economic exposure at location level. Prior to model evaluation, the stochastic meteorological forcing was bias-corrected against the observation-based stochastic set employed in the construction and calibration of the flood-loss model. The bias-correction method preserves the ratio of the quantiles of the future scenario to those of the present, as well as the correlation structure of the forcing variables.
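The quantile-ratio property of the bias correction can be sketched as follows: the corrected future quantile applies, to the observation-based quantile, the modelled future-to-present quantile ratio. The toy data and the simple empirical-quantile estimator are illustrative assumptions, not the operational implementation used with the flood-loss model.

```python
# Sketch of quantile-ratio bias correction: for each probability p,
# q_corrected(p) = q_obs(p) * q_future_model(p) / q_present_model(p).

def empirical_quantile(sorted_vals, p):
    """Linearly interpolated empirical quantile of a sorted sample."""
    idx = p * (len(sorted_vals) - 1)
    lo = int(idx)
    hi = min(lo + 1, len(sorted_vals) - 1)
    frac = idx - lo
    return sorted_vals[lo] * (1 - frac) + sorted_vals[hi] * frac

def corrected_future_quantiles(observed, present_model, future_model, probs):
    """Apply the modelled future/present quantile ratio to observations."""
    obs, pres, fut = sorted(observed), sorted(present_model), sorted(future_model)
    out = []
    for p in probs:
        q_obs = empirical_quantile(obs, p)
        q_pres = empirical_quantile(pres, p)
        q_fut = empirical_quantile(fut, p)
        out.append(q_obs * q_fut / q_pres if q_pres else q_obs)
    return out

# Toy example: the model is twice as wet as observations in the present
# climate, and its future quantiles are 1.5x its present ones, so the
# corrected future median is 1.5x the observed median of 20.
observed = [0.0, 10.0, 20.0, 30.0, 40.0]
present_model = [0.0, 20.0, 40.0, 60.0, 80.0]
future_model = [0.0, 30.0, 60.0, 90.0, 120.0]
print(corrected_future_quantiles(observed, present_model, future_model, [0.5]))
```

Note this sketch corrects one variable marginally; preserving the correlation structure across forcing variables, as the abstract states, requires a multivariate treatment not shown here.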
The average annual loss for Europe under the CAM-generated current-climate scenario is within 10-15% of the current industry estimate (based on observations), which supports the applicability of the proposed approach. For the future scenario the model suggests a significant increase in loss (more than 4 times) with respect to the present, in line with other studies for similar global warming pathways.
How to cite: Sassi, M., Scudeler, C., Nicotina, L., Assteerawatt, A., and Hilberts, A.: Economic loss due to flooding in Europe at 1.5°C global warming, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9373, https://doi.org/10.5194/egusphere-egu2020-9373, 2020.
EGU2020-18571 | Displays | ITS5.6/NH9.22
'Islands of excellence’ in catastrophe & disaster risk data, tools and services in the face of the climate change crisis – how can innovation systems advance stakeholder understanding and use of catastrophe and disaster sciences?Tracy Irvine
New catastrophe and disaster risk data, tools and services often include complex science and algorithms that offer profoundly important information for understanding risk or can inform climate adaptation. However, if few people know about or understand how and in what context to use these tools, they remain in the databases of academic institutions and in scientific journals across the world. How many tools that could transform the world’s understanding of risk and ways to adapt to that risk already exist or are currently under development? The answer is likely to be in the hundreds. But how many of those tools have ever been used beyond one or two scientific case studies? The answer is likely to be, in most cases, very few.
Academic institutions often erect barriers to access to their data and tools through institutional data management and by implementing non-commercial-use licensing in the dissemination of tools once scientific studies are completed. In addition, very commonly, insufficient thought is given to the exploitation strategies for these tools. The gaps in understanding and trust between academia and the needs of business sometimes feel insurmountable on both sides. Is ‘custom’ defying reason in the face of the climate change crisis and the need for rapid systems transformation globally?
The Oasis family offers new approaches to transparency, collaboration, dissemination and exploitation, and encourages interoperability by providing platforms that allow for comparative approaches to scientific data and tools.
Firstly, OASIS LMF is an open-source platform for developing, deploying and executing catastrophe models that enables the “plug and play” of hazard and vulnerability modules (along with exposure and insurance policy terms) by way of a set of data standards that describe a model. It has been built in collaboration with the insurance industry (https://oasislmf.org/). Oasis Palmtree offers support to enable access to this system.
Secondly, Oasis Hub has designed science innovation approaches to bring tools and data to wider, diverse audiences in collaboration with scientific institutions. We discuss Oasis Hub, a global window and conduit to free and commercial environmental, catastrophe and risk data, tools and services (https://oasishub.co/), as an example of a new innovation approach.
How to cite: Irvine, T.: 'Islands of excellence’ in catastrophe & disaster risk data, tools and services in the face of the climate change crisis – how can innovation systems advance stakeholder understanding and use of catastrophe and disaster sciences?, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18571, https://doi.org/10.5194/egusphere-egu2020-18571, 2020.
ITS5.7/CL2.14 – Climate change and other drivers of environmental change: Developments, interlinkages and impacts in regional seas and coastal regions
EGU2020-4603 | Displays | ITS5.7/CL2.14
Understanding the coupled land-sea system dynamics in coastal regions through a participatory approach: A Baltic case studySamaneh Seifollahi-Aghmiuni, Zahra Kalantari, and Georgia Destouni
Eutrophication and water quality issues in the Baltic Sea and its coastal zones have strong human dimensions, including land and water uses and their management in hinterland areas. Solutions require participatory approaches that inspire integrated long-term ‘land-sea’ systems management and contribute to the development of harmonized guidelines at different spatial scales. Considering the source-to-sea case of the Swedish Norrström drainage basin, its surrounding coastal areas, and the associated marine basin of the Baltic Sea, this study uses a participatory approach to enhance understanding of land-sea system links and dynamics, and to facilitate exploration of cross-system/sector cooperation opportunities for addressing water-related high-impact issues, including water pollution and eutrophication. Employing a problem-oriented system thinking approach, we investigate the following questions based on various sector perspectives in the study region: (i) What are the key underlying land-sea system elements and their interlinkages? (ii) What are the most relevant and important dynamics for evaluating land-sea system behavior? and (iii) What are the main challenges and opportunities for sustainable coastal management and development? Different groups of relevant stakeholders are asked to co-create causal loop diagrams characterizing the land-sea system dynamics based on their perceptions of and answers to questions (i)-(iii). From the co-created causal loop diagrams, sector-specific and general issues, challenges, opportunities, and barriers to sustainable coastal development in the study region are identified. In further analysis, various scenarios of the land-sea system dynamics and the importance of feedback mechanisms are investigated. The scenario results and associated system behavior are also validated by stakeholders.
Selected scenarios are further quantified by systems dynamics modeling, exploring the impacts of associated coastal development and policy options on the regional water-related issues and potential sustainable solutions. The scenario analysis outcomes highlight the necessity and usefulness of combining scientific knowledge with local expertise for synergistic and strategic planning of sustainable coastal region development.
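Quantifying a causal loop diagram through system dynamics modeling typically means turning each loop into stocks and flows and integrating them forward in time. A minimal sketch of one such loop, assuming a nutrient stock in coastal waters fed by land-based loading and drained by a removal flow proportional to the stock; all parameter values are illustrative, not results from the Norrström case study:

```python
# Minimal system-dynamics sketch (Euler integration) of one balancing
# feedback loop: d(stock)/dt = inflow - removal_rate * stock.
# Parameters are illustrative assumptions only.

def simulate(stock0, inflow, removal_rate, years, dt=1.0):
    """Return the nutrient-stock trajectory over the simulation horizon."""
    stock, path = stock0, [stock0]
    for _ in range(int(years / dt)):
        stock += (inflow - removal_rate * stock) * dt
        path.append(stock)
    return path

# Policy scenario: halving land-based inflow moves the equilibrium
# (inflow / removal_rate) from 100 units down toward 50.
baseline = simulate(stock0=100.0, inflow=10.0, removal_rate=0.1, years=30)
policy = simulate(stock0=100.0, inflow=5.0, removal_rate=0.1, years=30)
print(round(baseline[-1], 1), round(policy[-1], 1))
```

Real applications would couple many such loops (nutrient cycling, land use, economic sectors) and calibrate them with local data, but the structure of comparing policy trajectories against a baseline is the same.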
How to cite: Seifollahi-Aghmiuni, S., Kalantari, Z., and Destouni, G.: Understanding the coupled land-sea system dynamics in coastal regions through a participatory approach: A Baltic case study, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4603, https://doi.org/10.5194/egusphere-egu2020-4603, 2020.
EGU2020-9882 | Displays | ITS5.7/CL2.14
Past, current, and future freshwater inflows to the Baltic Sea under changing climate and socioeconomics
Alena Bartosova, René Capell, Jørgen E. Olesen, and Berit Arheimer
The Baltic Sea is suffering from eutrophication caused by nutrient discharges from land to sea. These freshwater inflows vary in magnitude from year to year as well as within each year due to e.g. natural variability, weather patterns, and seasonal human activities. Nutrient transport models are important tools for assessments of macro-nutrient fluxes (nitrogen, phosphorus) and for evaluating the connection between pollution sources and the assessed water body. While understanding of current status is important, impacts from changing climate and socio-economics on freshwater inflows to the Baltic Sea also need to be taken into account when planning management practices and mitigation measures.
Continental- to global-scale catchment-based hydrological models have emerged in recent years as tools for, e.g., flood forecasting, large-scale climate impact analyses, and estimation of time-dynamic water fluxes into sea basins. Here, we present results from the pan-European rainfall-runoff and nutrient transfer model E-HYPE, developed as a multi-purpose tool for large-scale hydrological analyses. We compared current freshwater inflows from land with those from dynamic modelling with E-HYPE under various climate and socioeconomic conditions. The socioeconomic conditions (land use, agricultural practices, population changes, dietary changes, atmospheric deposition, and wastewater technologies) were evaluated for three additional time horizons: the 2050s using the Shared Socioeconomic Pathways, the 1900s using historical data, and a reference period using a synthetic “no human impact” scenario. An ensemble of four climate models that preserves the range of projected changes in precipitation and temperature from a larger ensemble was selected for analysis of climate impacts in the 2050s.
We show that while climate change affects nutrient loads to the Baltic Sea, these impacts can be overshadowed by the impacts of changing socioeconomic factors. Historical nitrogen loads were estimated as 43% and 33% of the current loads for the 1900s and the “no human impact” scenarios, respectively. Average nitrogen loads are projected to increase by 4-10% (8% on average) as a response to climate change by the 2050s. Mitigation measures alone, without addressing the magnitude of the nutrient sources, reduced the total nitrogen load by <5%, with local efficiencies being reduced through retention processes. However, changes in the socioeconomic drivers led to significant changes in the future loads, with the range of impacts spanning 30% of the current load depending on the socioeconomic pathway followed. This means that policy decisions have by far the largest impact when managing eutrophication in the Baltic Sea region.
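The relative-load arithmetic reported above can be reproduced in a few lines (the percentages are taken from the abstract; the absolute current load is an arbitrary placeholder, since only the ratios matter here):

```python
# Back-of-envelope reproduction of the relative nitrogen loads reported in
# the abstract. The absolute load value is a hypothetical placeholder.

current_load = 100.0  # arbitrary units; only ratios matter

loads = {
    "1900s (historical)": 0.43 * current_load,
    "no human impact":    0.33 * current_load,
    "2050s climate only": 1.08 * current_load,  # +8% mean climate response
}

# Socioeconomic pathways span ~30% of the current load, dwarfing the
# ~8% mean climate signal.
socioeconomic_span = 0.30 * current_load
climate_signal = loads["2050s climate only"] - current_load
```

Putting the two ranges side by side makes the abstract's headline point explicit: the spread across socioeconomic pathways is several times larger than the mean climate-driven change.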
Bartosova, A., Capell, R., Olesen, J.E., et al. (2019). Future socioeconomic conditions may have a larger impact than climate change on nutrient loads to the Baltic Sea. Ambio 48, 1325–1336, doi:10.1007/s13280-019-01243-5
How to cite: Bartosova, A., Capell, R., Olesen, J. E., and Arheimer, B.: Past, current, and future freshwater inflows to the Baltic Sea under changing climate and socioeconomics, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9882, https://doi.org/10.5194/egusphere-egu2020-9882, 2020.
EGU2020-21520 | Displays | ITS5.7/CL2.14
First results of model sensitivity studies on the influence of global changes to the North and Baltic Seas
Birte-Marie Ehlers, Frank Janssen, and Janna Abalichin
The “German Strategy for Adaptation to Climate Change” (DAS) has been established as the political framework for climate change adaptation in Germany. One task of the “Adaptation Action Plan of the DAS” is the establishment of a permanent seamless climate prediction service. The pilot project “Projection Service for Waterways and Shipping” (ProWaS) prepares an operational forecasting and projection service for climate, extreme weather, and coastal and inland waterbodies. The target region is the North Sea and Baltic Sea, with focus on the German coastal region and its estuaries.
ProWaS provides regional model setups for the North and Baltic Seas. To identify technical issues and validate the model setups, 20-year hindcast simulations forced with the regional reanalysis COSMO-REA6 (Bollmeyer et al., 2015) were carried out.
These simulations are used as a basis for sensitivity studies with reference to global change scenarios. To evaluate the effect of global changes on coastal regions, especially in the North and Baltic Seas, model studies regarding global sea level rise, changes in global ocean and air temperature, changes in global salinity, and changes in regional river runoff have been performed. To this end, the boundary conditions of a hindcast simulation are adapted to the different change conditions, and sensitivity studies for different periods have been carried out. First results of these model sensitivity studies are presented. These results will be used as a basis for the further development of climate projection models.
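The generic idea behind such sensitivity studies, a delta-change perturbation of the hindcast boundary forcing, can be sketched as follows (the field values, shapes, and offsets below are illustrative and not the actual ProWaS configuration):

```python
# Illustrative delta-change perturbation of open-boundary forcing: take a
# hindcast boundary field and add a uniform offset (e.g. sea level rise or
# warming) before re-running the regional model. Values are hypothetical.

import numpy as np

def perturb_boundary(field, offset):
    """Return a copy of a hindcast boundary field shifted by a constant delta."""
    return np.asarray(field, dtype=float) + offset

hindcast_ssh = np.array([0.10, 0.05, -0.02, 0.08])   # sea surface height [m]
slr_scenario = perturb_boundary(hindcast_ssh, 0.30)  # +30 cm global sea level rise
```

The same pattern applies to temperature, salinity, or river runoff boundaries; comparing the perturbed run against the unperturbed hindcast isolates the regional response to each global change signal.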
How to cite: Ehlers, B.-M., Janssen, F., and Abalichin, J.: First results of model sensitivity studies on the influence of global changes to the North and Baltic Seas, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21520, https://doi.org/10.5194/egusphere-egu2020-21520, 2020.
EGU2020-12403 | Displays | ITS5.7/CL2.14
Climate change effects on UK shelf sea’s connectivity and hydrographic properties
Claudia Gabriela Mayorga Adame, James Harle, Jason Holt, Artioli Yuri, and Sarah Wakelin
Climate change is expected to cause important changes in ocean physics, which will in turn have important effects on marine ecosystems. The ReCICLE project (Resolving Climate Impacts on shelf and CoastaL seas Ecosystems) aims to identify and quantify the envelope of response to climate change of lower-trophic-level shelf-sea ecosystems and their functional interactions, in order to assess the vulnerability of ecosystem goods and services in the UK shelf seas. The central tool for this work is an ensemble of the coupled hydrodynamic-biogeochemical ecosystem model NEMO-ERSEM in the Atlantic Margin Model configuration at 7 km horizontal resolution (AMM7), forced by different CMIP5 global climate models to generate downscaled scenarios for future decades.
Changes in connectivity patterns are expected to affect coastal populations of marine organisms in shelf seas. Holt et al. (2018, GRL, https://doi.org/10.1029/2018GL078878) showed the potential for radical reorganization of the North Sea circulation in earlier simulations. To assess this issue, particle tracking experiments are carried out during two 10-year time slices, in the recent past (2000-2010) and in the future (2040-2050), in ensemble members of the ReCICLE AMM7 regional downscaling showing contrasting circulation patterns. Surface particles were uniformly seeded in the UK shelf seas every month and tracked for 30 days. The resulting particle trajectories are analysed with cluster analysis techniques to determine whether persistent oceanographic boundaries rearrange in the future climate scenarios. The ecological effects of circulation and water-mass changes in the future ocean are discussed from a Lagrangian perspective.
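One common way to summarise such Lagrangian experiments (a simpler alternative to the trajectory clustering described above) is a region-to-region connectivity matrix built from particle seeding and end positions. The sketch below uses toy region labels, not model output:

```python
# Sketch of Lagrangian connectivity: count, for particles seeded in region i,
# the fraction ending in region j after the 30-day tracking window.
# Region labels and particle counts are made-up toy data.

import numpy as np

def connectivity_matrix(start_regions, end_regions, n_regions):
    """Row-normalised matrix C[i, j] = P(end in region j | seeded in region i)."""
    counts = np.zeros((n_regions, n_regions))
    for i, j in zip(start_regions, end_regions):
        counts[i, j] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Rows with no seeded particles stay all-zero instead of dividing by zero.
    return np.divide(counts, row_sums, out=np.zeros_like(counts),
                     where=row_sums > 0)

# Toy example: particles from region 0 mostly stay; region 1 exports to 0.
C = connectivity_matrix([0, 0, 0, 1, 1], [0, 0, 1, 0, 0], n_regions=2)
```

Comparing such matrices between the 2000-2010 and 2040-2050 time slices would show whether transport pathways, and hence persistent oceanographic boundaries, reorganise under the future scenarios.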
How to cite: Mayorga Adame, C. G., Harle, J., Holt, J., Yuri, A., and Wakelin, S.: Climate change effects on UK shelf sea’s connectivity and hydrographic properties, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12403, https://doi.org/10.5194/egusphere-egu2020-12403, 2020.
EGU2020-12932 | Displays | ITS5.7/CL2.14
Wind wake effects of large offshore wind farms, an underrated impact on the marine ecosystem?
Corinna Schrum, Naveed Akhtar, Nils Christiansen, Jeff Carpenter, Ute Daewel, Bughsin Djath, Martin Hieronymi, Burkhardt Rockel, Johannes Schulz-Stellenfleth, Larissa Schultze, and Emil Stanev
The North Sea is a world-wide hot-spot of offshore wind energy production, and installed capacity is rapidly increasing. Current and potential future developments raise concerns about the implications for the environment and ecosystem. Offshore wind farms change the physical environment across scales in various ways, which have the potential to modify biogeochemical fluxes and ecosystem structure. The foundations of wind farms cause oceanic wakes and sediment fluxes into the water column. Oceanic wakes have spatial scales of about O(1 km) and structure local ecosystems within and in the vicinity of wind farms. Spatially larger effects can be expected from wind deficits and atmospheric boundary layer turbulence arising from wind farms. Wind disturbances often extend over multiple tens of kilometres and are detectable as large-scale wind wakes. Moreover, boundary layer disturbances have the potential to change local weather conditions and foster, e.g., local cloud development. The atmospheric changes in turn change ocean circulation and turbulence on the same large spatial scales and modulate ocean nutrient fluxes. The latter directly influence biological productivity and food web structure. These cascading effects from the atmosphere to ocean hydrodynamics, biogeochemistry and food webs are likely underrated when assessing the potential and risks of offshore wind.
We present the latest evidence for local to regional environmental impacts, with a focus on wind wakes, and discuss results from observations, remote sensing and modelling. Using a suite of coupled atmosphere, ocean hydrodynamic and biogeochemistry models, we quantify the impact of large-scale offshore wind farms in the North Sea. The local and regional meteorological effects are studied using the regional climate model COSMO-CLM, and the coupled ocean hydrodynamics-ecosystem model ECOSMO is used to study the consequent effects on ocean hydrodynamics and ocean productivity. Both models operate at a horizontal resolution of 2 km.
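As a rough illustration of how wind-farm wakes can be parametrised at all, the sketch below implements a top-hat velocity-deficit model of the Jensen/Park type; the actual wind-farm parametrisation used in COSMO-CLM is more sophisticated, and the thrust coefficient and wake decay constant here are illustrative values, not those of the study.

```python
# Hedged sketch of a Jensen/Park-type top-hat wake model: the velocity
# deficit behind a turbine decays as the wake expands linearly downstream.
# ct (thrust coefficient) and k (wake decay constant) are illustrative.

def jensen_deficit(u_inf, ct, rotor_diameter, x, k=0.04):
    """Velocity deficit [m/s] at downstream distance x [m] (x > 0)."""
    expansion = (1.0 + 2.0 * k * x / rotor_diameter) ** 2
    return u_inf * (1.0 - (1.0 - ct) ** 0.5) / expansion

# The deficit decays with distance but is still non-zero tens of km downwind,
# consistent with the large-scale wakes described in the abstract.
near = jensen_deficit(u_inf=10.0, ct=0.8, rotor_diameter=150.0, x=1_000.0)
far = jensen_deficit(u_inf=10.0, ct=0.8, rotor_diameter=150.0, x=30_000.0)
```

In a coupled setup, deficits of this kind feed back into the ocean model through reduced surface wind stress over the wake region.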
How to cite: Schrum, C., Akhtar, N., Christiansen, N., Carpenter, J., Daewel, U., Djath, B., Hieronymi, M., Rockel, B., Schulz-Stellenfleth, J., Schultze, L., and Stanev, E.: Wind wake effects of large offshore wind farms, an underrated impact on the marine ecosystem?, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12932, https://doi.org/10.5194/egusphere-egu2020-12932, 2020.
EGU2020-19063 | Displays | ITS5.7/CL2.14
Climate change and recovery from eutrophication reduce benthic fauna and carbon processing in a coastal sea
Eva Ehrnsten, Alf Norkko, Bärbel Müller-Karulis, Erik Gustafsson, and Bo Gustafsson
Nutrient loading and climate change affect coastal ecosystems worldwide. Unravelling the combined effects of these pressures on benthic macrofauna is essential for understanding the future functioning of coastal ecosystems, as it is an important component linking the benthic and pelagic realms. In this study, we extended an existing model of benthic macrofauna coupled with the physical-biogeochemical BALTSEM model of the Baltic Sea to study the combined effects of changing nutrient loads and climate on biomass and metabolism of benthic macrofauna historically and in scenarios for the future. Based on a statistical comparison with a large validation dataset of measured biomasses, the model showed good or reasonable performance across the different basins and depth strata in the model area. In scenarios with decreasing nutrient loads according to the Baltic Sea Action Plan, but also with continued recent loads (mean loads 2012-2014), overall macrofaunal biomass and carbon processing were projected to decrease significantly by the end of the century despite improved oxygen conditions at the seafloor. Climate change led to intensified pelagic recycling of primary production and reduced export of particulate organic carbon to the seafloor with negative effects on macrofaunal biomass. In the high nutrient load scenario, representing the highest recorded historical loads, climate change counteracted the effects of increased productivity leading to a hyperbolic response: biomass and carbon processing increased up to mid-21st century but then decreased, giving almost no net change by the end of the 21st century compared to present. The study shows that benthic responses to environmental change are nonlinear and partly decoupled from pelagic responses and indicates that benthic-pelagic coupling might be weaker in a warmer and less eutrophic sea.
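The benthic response described here can be caricatured with a one-stock biomass model (hypothetical parameters, far simpler than the coupled BALTSEM formulation) in which macrofaunal biomass grows on the particulate organic carbon (POC) flux reaching the seafloor and loses carbon to metabolism:

```python
# Toy benthic biomass model, illustrative only: biomass B assimilates a
# fraction of the POC flux exported to the seafloor and respires at a
# fixed metabolic rate. Parameters are hypothetical.

def benthic_biomass(poc_flux, assimilation=0.3, metabolic_rate=0.2,
                    biomass0=10.0, years=100):
    """dB/dt = assimilation * poc_flux - metabolic_rate * B (annual Euler steps)."""
    biomass = biomass0
    for _ in range(years):
        biomass += assimilation * poc_flux - metabolic_rate * biomass
    return biomass

# Reduced POC export (a warmer, less eutrophic sea with more pelagic
# recycling) lowers the equilibrium benthic biomass.
b_present = benthic_biomass(poc_flux=20.0)
b_future = benthic_biomass(poc_flux=12.0)
```

Even this caricature reproduces the qualitative mechanism in the abstract: equilibrium biomass scales with the POC flux, so intensified pelagic recycling that reduces export to the seafloor translates directly into lower macrofaunal biomass.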
How to cite: Ehrnsten, E., Norkko, A., Müller-Karulis, B., Gustafsson, E., and Gustafsson, B.: Climate change and recovery from eutrophication reduce benthic fauna and carbon processing in a coastal sea, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19063, https://doi.org/10.5194/egusphere-egu2020-19063, 2020.
EGU2020-12312 | Displays | ITS5.7/CL2.14
Synergies and trade-offs for the SDGs in a deltaic socio-ecological system: Development of an Integrated Assessment Model
Charlotte Marcinko, Andrew Harfoot, Tim Daw, Derek Clarke, Sugata Hazra, Craig Hutton, Chris Hill, Attila Lazar, and Robert Nicholls
The United Nations Sustainable Development Goals (SDGs) promote sustainable development and aim to address multiple challenges, including those related to poverty, hunger, inequality, climate change and environmental degradation. Interlinkages between SDGs mean there is potential for interactions, synergies and trade-offs between individual goals across multiple temporal and spatial scales. We aim to develop an Integrated Assessment Model (IAM) of a complex deltaic socio-ecological system where opportunities and trade-offs between the SDGs can be analysed. This is designed to inform local/regional policy. We focus on the Sundarban Biosphere Reserve (SBR) within the Indian Ganga Delta. This is home to 5.6 million often poor people with a strong dependence on rural livelihoods, and also includes the Indian portion of the world’s largest mangrove forest – the Sundarbans. The area is subject to multiple drivers of environmental change operating at multiple scales (e.g. global climate change and sea-level rise, deltaic subsidence, extensive land use conversion and widespread migration). Here we discuss the challenges of linking models of human and natural systems to each other in the context of local policy decisions and SDG indicators. Challenges include linking processes derived at multiple spatial and temporal scales, and data limitations. We present a framework for an IAM, based on the Delta Dynamic Emulator Model (ΔDIEM), to investigate the effects of current and future trends in environmental change and policy decisions within the SBR across a broad range of sub-thematic SDG indicators. This work brings together a wealth of experience in understanding and modelling changes in complex human and natural systems within deltas from previous projects (ESPA Deltas and DECCMA), along with local government and stakeholder expert knowledge within the Indian Ganga Delta.
How to cite: Marcinko, C., Harfoot, A., Daw, T., Clarke, D., Hazra, S., Hutton, C., Hill, C., Lazar, A., and Nicholls, R.: Synergies and trade-offs for the SDGs in a deltaic socio-ecological system: Development of an Integrated Assessment Model, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12312, https://doi.org/10.5194/egusphere-egu2020-12312, 2020.
EGU2020-7997 | Displays | ITS5.7/CL2.14 | Highlight
Regional Impacts of Offshore Windfarming on the Hydro- and Ecosystem Dynamics in the North SeaNils Christiansen, Ute Daewel, Corinna Schrum, Jeff Carpenter, Bughsin Djath, and Johannes Schulz-Stellenfleth
The production of renewable offshore wind energy in the North Sea is increasing rapidly, including development in ecologically significant regions. Recent studies have identified impacts such as large-scale wind wake effects and mixing of the water column induced by wind turbine foundations. Depending on atmospheric stability, wind wakes imply changes in momentum flux and increased turbulence up to 70 km downstream, affecting local conditions (e.g. wind speed, cloud development) near offshore wind farms. Atmospheric wake effects likely translate to the sea-surface boundary layer and hence influence vertical transport in the surface mixed layer. Changes in ocean stratification raise concerns about substantial consequences for local hydrodynamic and biogeochemical processes as well as for the marine ecosystem.
Using newly developed wind wake parametrisations together with the unstructured-grid model SCHISM and the biogeochemistry model ECOSMO, this study addresses windfarming impacts in the North Sea for future offshore wind farm scenarios. We focus on wind wake implications for ocean dynamics as well as on changes in the tidal mixing fronts near the Dogger Bank and potential ecological consequences. In doing so, we provide important knowledge on how cross-scale wind farm impacts can be suitably modelled at the system scale.
How to cite: Christiansen, N., Daewel, U., Schrum, C., Carpenter, J., Djath, B., and Schulz-Stellenfleth, J.: Regional Impacts of Offshore Windfarming on the Hydro- and Ecosystem Dynamics in the North Sea, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7997, https://doi.org/10.5194/egusphere-egu2020-7997, 2020.
EGU2020-21553 | Displays | ITS5.7/CL2.14
The added value of high-resolution river runoff forcing for simulating long-term ecosystem dynamics and biogeochemical cycling in northern European shelf seasFabian Werner, Ute Daewel, Stefan Hagemann, Rohini Kumar, Oldrich Rakovec, Michael Weber, Sabine Attinger, and Corinna Schrum
Regional climate change and anthropogenic activities are altering land-based freshwater runoff and nutrient loads to northern European shelf seas, both of which leave their imprint on the hydrography and biogeochemistry of coastal ecosystems on annual to multi-decadal timescales. Long-term simulations forced by regional climate models have proven to be powerful tools for identifying these impacts on the variability of the North Sea and Baltic Sea ecosystems. However, the simulations are prone to substantial biases concerning the land-sea coupling. Long-term river discharge forcing for regional ocean models usually needs to be compiled from different data sources and climatologies, typically resulting in patchy, inconsistent datasets. Additionally, the contribution of smaller river catchments and day-to-day discharge variability is not adequately resolved. In this study, we used two novel high-resolution river runoff datasets to simulate 66-year ecosystem hindcasts with the 3D coupled physical-biogeochemical NPZD-model ECOSMO II, to study the role of river discharge forcing in the quality of the ecosystem simulation. The forcing datasets are based on consistent long-term reconstructions of the hydrological discharge model (HD) at 5 arcmin resolution and the mesoscale hydrological model (mHM) at 0.0625° resolution, both covering the entire northern European catchment region. We compare long-term seasonal and annual statistics of salinity, oxygen, inorganic nitrogen and phosphorus from the model simulations to those from a reference simulation with standard runoff, compiled from various data sources, and to those estimated from observations. Furthermore, potential bottom-up effects on lower-trophic-level dynamics are investigated by comparatively analyzing long-term variability in phytoplankton biomass and primary production in model simulations forced by different runoff products.
How to cite: Werner, F., Daewel, U., Hagemann, S., Kumar, R., Rakovec, O., Weber, M., Attinger, S., and Schrum, C.: The added value of high-resolution river runoff forcing for simulating long-term ecosystem dynamics and biogeochemical cycling in northern European shelf seas, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21553, https://doi.org/10.5194/egusphere-egu2020-21553, 2020.
EGU2020-2080 | Displays | ITS5.7/CL2.14
Methodology for sustainable development along an eco-fragile zone for land use management and mitigating climate change: Evidence of East Kolkata WetlandsSudeshna Kumar, Haimanti Banerji, and Biplab Kanti Sengupta
Kolkata’s city core is depopulating, while the wider city has experienced explosive population growth and rapid urbanization that is encroaching on the ecologically fragile wetlands of the eastern fringe of the main city. This contrasting urban growth along the East Kolkata Wetlands is attributed mainly to the increase in city size, the expansion of tertiary and service sector activities (especially the IT boom), and the improved transit facilities along the eastern fringe. These factors have helped the real estate sector thrive along the vulnerable eastern fringe of the city, leading to drastic change in the wetland ecosystem. Secondary studies have also indicated that consumption of wetlands, indicated by fragmented land use, has altered the microclimate of Kolkata. The significant land cover change due to human-induced perturbations has led to a rise in temperature in the region (Li, Mitra, Dong, & Yang, 2018). The entire transit corridor is subject to verticalization that sits in contrast with the cultural essence of Kolkata, bringing with it a myriad of economic, social, cultural and subsequent planning challenges. A critical review of selected literature shows how best planning practices have integrated transit policies with land use, which helped in formulating strategies and policies specific to the regional context in order to achieve sustainable development in the study area. The study explores how the transit policies in Kolkata have transformed the city physically, socially and culturally, and changed its microclimate. It identifies future trends and assesses future development potential and intensification with the help of qualitative and quantitative analysis. The study also conducts a land suitability analysis to frame proposals and recommendations for ensuring sustainable development along the East Kolkata Wetlands.
The outcome of this study is a methodology for strategic sustainability planning for developing the growth node along the eastern fringe of Kolkata, which will curb the encroachment of the East Kolkata Wetlands. The study also provides a platform for policy recommendations on land use management and the mitigation of future climate change in this eco-fragile zone.
Keywords: land use; climate change; transit policies; sustainable planning; wetlands
Reference
Li, X., Mitra, C., Dong, L., & Yang, Q. (2018). Understanding land use change impacts on microclimate using Weather Research and Forecasting (WRF) model. Physics and Chemistry of the Earth, 103, 115–126. https://doi.org/10.1016/j.pce.2017.01.017
How to cite: Kumar, S., Banerji, H., and Kanti Sengupta, B.: Methodology for sustainable development along an eco-fragile zone for land use management and mitigating climate change: Evidence of East Kolkata Wetlands, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2080, https://doi.org/10.5194/egusphere-egu2020-2080, 2020.
EGU2020-8183 | Displays | ITS5.7/CL2.14
Soft rock cliff retreat under changing climate driversThomas Spencer and Susan Brooks
Shoreline retreat can happen rapidly in cliffs composed of loosely consolidated glacial and pre-glacial sediments. Typical centennial-scale average retreat rates for some cliffed coastlines of East Anglia, UK are 2–5 m a⁻¹ where cliffs have no protection from storm energetics. Recent research using pre- and post-storm clifftop (geomorphological) surveys, as well as aerial photographs, has shown that in single events retreat can be 3–4 times the long-term average, with up to 15 m of retreat in a single event. Periods of clifftop stasis are thus interspersed with short-term shocks, when meteorological conditions generate energetic drivers of change (elevated still water levels, high onshore waves, high rainfall inputs). Furthermore, short-term shocks deliver enhanced sediment supply to the nearshore region, which is an important factor to take into account within future management planning strategies.
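The relationship between single-event retreat and the long-term average can be illustrated with a minimal sketch; the survey positions, profile count and the 3.5 m a⁻¹ rate below are hypothetical stand-ins, not data from the study:

```python
import numpy as np

# Hypothetical cross-shore clifftop positions (m) from pre- and post-storm
# surveys along three illustrative profiles; real values come from
# geomorphological surveys and aerial photographs.
pre_storm = np.array([100.0, 102.5, 98.0])
post_storm = np.array([96.0, 88.0, 95.5])

event_retreat = pre_storm - post_storm  # metres lost per profile in one event
long_term_rate = 3.5                    # m/yr, mid-range of the 2-5 m/yr average

# Ratio of single-event retreat to one year of average retreat; values of
# 3-4 correspond to the short-term shocks described above.
shock_factor = event_retreat / long_term_rate
print(event_retreat, shock_factor.max())
```

With these invented numbers, the middle profile loses 14.5 m in one event, roughly four years of average retreat at once.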
This paper uses the latest Earth Observation data to quantify and evaluate rates of soft rock cliff retreat, thereby identifying periods when short-term shocks have been delivered to the cliffs. It then explores the climate drivers of these shocks and assesses the associated synoptic meteorological scenarios. Finally, it considers the implications for the quantities of sediment released, in terms of both overall magnitude and alongshore variability.
Results suggest that three recent events stand out as having a significant impact on rates of cliff retreat and associated sediment release on the Suffolk coast, southern North Sea. The “Big Freeze” of the UK winter of 2010–11 involved a protracted period of easterly air flow from mid-November into December 2010. The process drivers were high-magnitude onshore winds, generating nearshore waves of around 4 m. The 5 December 2013 North Sea surge similarly resulted in rapid cliff retreat and sediment release. In this event winds were alongshore so the wave impacts were lower, but the elevated water levels generated by the surge meant that wave action could be directed onto the cliffs. By far the biggest recent event in terms of storm forcing energetics was the February–March 2018 “Beast from the East” and “Mini Beast”, where persistent onshore winds generated waves of almost 4.5 m at Southwold Approaches (the highest on record) that coincided with two phases of high spring tides (no surge). When regional-scale Sudden Stratospheric Warming (SSW) generates strong and persistent easterly winds, there are widespread, potentially irreversible consequences for cliff and beach sediments around the western North Sea coastline.
How to cite: Spencer, T. and Brooks, S.: Soft rock cliff retreat under changing climate drivers, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8183, https://doi.org/10.5194/egusphere-egu2020-8183, 2020.
EGU2020-9203 | Displays | ITS5.7/CL2.14
Occurrence of Freak Waves in near-coast regions of the North Sea and their potential relationship with objective weather typesJens Möller, Ina Teutsch, and Ralf Weisse
Rogue waves are a potential threat to both shipping and offshore structures such as wind power stations and oil platforms. While individual rogue waves are short-lived and almost unpredictable, there is a chance of predicting the probability of the occurrence of freak waves in conjunction with different weather types. The German Federal Ministry of Transport and Digital Infrastructure has tasked its Network of Experts to investigate the possible evolution of extreme threats to shipping and offshore wind energy plants in the German Bight, the south-eastern part of the North Sea near the German coast.
In this study, we present an analysis of the co-occurrence of freak waves with different weather types in the German Bight in the past (from observations). In addition, we investigate potential future changes in the occurrence of freak waves due to a changing climate and the changing occurrence of the relevant weather types (using the coupled regional ocean-atmosphere climate model MPI-OM).
The investigation indicates a connection between the probability of the occurrence of freak waves at different stations and certain weather types. Potentially, this relationship could be used to warn the crews of ships or offshore installations. In the coupled regional ocean-atmosphere climate model MPI-OM under scenario RCP8.5, we detect a future increase in precisely those weather types that are correlated with high waves.
How to cite: Möller, J., Teutsch, I., and Weisse, R.: Occurrence of Freak Waves in near-coast regions of the North Sea and their potential relationship with objective weather types, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9203, https://doi.org/10.5194/egusphere-egu2020-9203, 2020.
EGU2020-13315 | Displays | ITS5.7/CL2.14
Effects of sea surface temperature on distributions of two tropical tunas under the shifts of climate periodicityYi-Ting Hsiao, Min-Hui Lo, and Hui-Yu Wang
Tunas provide an important marine resource for the countries that surround the Pacific Ocean. Under climate change, climate models project an increasing frequency of central Pacific El Niño/Southern Oscillation (ENSO) events but a decreasing frequency of eastern Pacific ENSO events. This shift may cause sea temperatures to rise in the central western Pacific relative to the eastern Pacific, leading to corresponding changes in biological productivity and in turn influencing the early life stages of tunas. Consequently, it is crucial to investigate how such climatic periodicity will impact the distribution and abundance of tunas.
Here, yellowfin and albacore tunas are selected as our study species. Yellowfin tuna prefer warmer environments and have smaller body sizes and younger age-at-maturation (1 year) than albacore tuna (which mature in 2 years). We use spatially explicit (5° grid) longline catch-and-effort data across 20°N–20°S and 105°E–75°W for 1970–2015, from the Inter-American Tropical Tuna Commission (IATTC) and the Western & Central Pacific Fisheries Commission (WCPFC). We analyze the spatial and inter-annual variation in catch-per-unit-effort for the two tunas with respect to changes in sea surface temperature in central Pacific vs. eastern Pacific El Niño/Southern Oscillation events. Investigating whether the distributions of tunas change between central Pacific and eastern Pacific events will greatly help fisheries management and the sustainable development of marine resources.
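A minimal sketch of the kind of catch-per-unit-effort (CPUE) comparison described above; the records, grid cells and event-year lists below are invented for illustration and do not reflect the actual IATTC/WCPFC data or event classifications:

```python
import numpy as np
import pandas as pd

# Hypothetical longline records aggregated to 5-degree grid cells; column
# names and values are illustrative, not the actual IATTC/WCPFC schema.
records = pd.DataFrame({
    "lat_bin": [0, 0, 5, 5, 0, 0],
    "lon_bin": [180, 180, 200, 200, 180, 180],
    "year":    [1997, 1998, 1997, 1998, 2009, 2010],
    "catch":   [120, 80, 60, 90, 150, 70],              # number of fish
    "effort":  [10000, 9000, 8000, 8500, 12000, 9500],  # hooks
})

# Illustrative year lists for eastern-Pacific (EP) and central-Pacific (CP)
# El Nino events; real event lists differ.
ep_years, cp_years = {1997, 1998}, {2009, 2010}
records["event"] = np.select(
    [records["year"].isin(ep_years), records["year"].isin(cp_years)],
    ["EP", "CP"],
    default="neutral",
)

# Nominal CPUE per event type and grid cell: total catch / total effort,
# the quantity whose spatial pattern is compared between EP and CP events.
cpue = records.groupby(["event", "lat_bin", "lon_bin"], as_index=False)[
    ["catch", "effort"]].sum()
cpue["cpue"] = cpue["catch"] / cpue["effort"]
print(cpue)
```

Pooling catch and effort before dividing gives an effort-weighted CPUE per cell, rather than an average of per-record ratios.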
How to cite: Hsiao, Y.-T., Lo, M.-H., and Wang, H.-Y.: Effects of sea surface temperature on distributions of two tropical tunas under the shifts of climate periodicity, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13315, https://doi.org/10.5194/egusphere-egu2020-13315, 2020.
EGU2020-18083 | Displays | ITS5.7/CL2.14
Linking weather types to tense dewatering situations of the Kiel Canal (NOK)Corinna Jensen, Jens Möller, and Peter Löwe
Within the “Network of Experts” of the German Federal Ministry of Transport and Digital Infrastructure (BMVI), the effect of climate change on infrastructure is investigated. One aspect of this project is the future dewatering situation of the Kiel Canal (“Nord-Ostsee-Kanal”, NOK). The Kiel Canal is one of the world’s busiest man-made waterways navigable by seagoing ships. It connects the North Sea to the Baltic Sea and can save ships hundreds of kilometers of distance. With up to 100 million tons of cargo transferred annually, it is an economically very important transport route. In addition to the transportation of cargo, the canal is also used to discharge water from smaller rivers and to drain a catchment area of about 1500 km².
The canal can only operate within a certain water level range. If its water level exceeds the maximum level, the water must be drained into the sea. About 90% of the time, the water is drained into the North Sea during time windows with low tide. If the water level outside the canal is too high, drainage is not possible and canal traffic has to be reduced or, in extreme cases, shut down. Due to the expected sea level rise, the potential time windows for dewatering will decrease in the future. With a decrease in operational hours, there will be substantial economic losses as well as an increase in traffic around Denmark.
To get a better understanding of what causes tense dewatering situations other than sea level rise, high water levels on the outside of the canal are linked to weather types. Weather types describe large-scale circulation patterns and can therefore give an estimate of the tracks of low-pressure systems as well as the prevailing winds, which can explain surges and water levels at the coast. This analysis is conducted for one weather type classification method based solely on sea level pressure fields. Weather types derived from regionally coupled climate models as well as from reanalyses are investigated.
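As an illustration of SLP-based weather typing, the sketch below clusters synthetic daily sea level pressure fields into a handful of recurring circulation patterns with plain k-means; the data, grid size and number of types are arbitrary assumptions, not the classification method actually used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily sea level pressure anomaly fields (days x grid points),
# a stand-in for reanalysis SLP over the region of interest.
n_days, n_grid, k = 500, 40, 4
slp = rng.normal(0.0, 5.0, size=(n_days, n_grid))

# Plain k-means: each day is assigned to the nearest of k mean circulation
# patterns (centroids), which play the role of weather types.
centroids = slp[rng.choice(n_days, size=k, replace=False)]
for _ in range(50):
    # Euclidean distance of every day's field to every centroid
    dist = np.linalg.norm(slp[:, None, :] - centroids[None, :, :], axis=2)
    labels = dist.argmin(axis=1)
    # Recompute centroids; keep the old one if a cluster emptied out
    centroids = np.array([
        slp[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
        for j in range(k)
    ])

# Relative frequency of each weather type; shifts in these frequencies
# between periods are what such an analysis would compare.
freq = np.bincount(labels, minlength=k) / n_days
print(freq)
```

Linking each day's weather type to the observed water level outside the canal then yields, per type, the share of days on which dewatering windows were lost.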
How to cite: Jensen, C., Möller, J., and Löwe, P.: Linking weather types to tense dewatering situations of the Kiel Canal (NOK), EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18083, https://doi.org/10.5194/egusphere-egu2020-18083, 2020.
EGU2020-19110 | Displays | ITS5.7/CL2.14
Impacts of sand extraction and deposition on the ecosystem recovery rate in the southern coastal zone of PortugalTeresa Drago, Rui Taborda, Sebastião Teixeira, Marcos Rosa, João Pedro Cascalho, Miriam Tuaty-Guerra, Maria José Gaudêncio, Jorge Gonçalves, Paulo Relvas, Erwan Garel, Luciano Júnior, Victor Henriques, Pedro Terrinha, Jorge Arteaga, and Ana Ramos
Artificial nourishment of sandy beaches using sediment from borrow areas located on the continental shelf is an increasingly recommended solution for reversing the erosion that affects the coastal zone. However, the impact of sand extraction on the shelf and of sand deposition on the beach on benthic communities (their structure and functioning) is still poorly known, contributing to the lack of information needed for the assessment of Descriptor 6 (Sea-floor integrity) of the Marine Strategy Framework Directive (MSFD).
The aim of this work is to evaluate the morphological and ecosystem impacts of sand extraction on the inner shelf, as well as the consequent impacts of sand deposition on the nourished beaches. In this context, short-term and long-term monitoring activities based on a multidisciplinary approach were implemented at new and former borrow areas on the inner continental shelf of southern Portugal (Algarve) and at adjacent beaches. These activities include multibeam bathymetric surveys complemented by surface sediment sampling, wave and current measurements, and a fluorescent-tracer-marked sand experiment. Moreover, benthic macrofauna composition and structure are being studied at the borrow areas (former and new) and at the nourished beaches. The acquired data allow a first assessment of the recovery rates of the sea-bottom morphology and benthic communities, and contribute to a better understanding of the processes involved.
The authors would like to acknowledge the financial support of FCT through project UIDB/50019/2020 – IDL and of the ECOEXA project (MAR-01.04.02-FEAMP-0016).
How to cite: Drago, T., Taborda, R., Teixeira, S., Rosa, M., Cascalho, J. P., Tuaty-Guerra, M., Gaudêncio, M. J., Gonçalves, J., Relvas, P., Garel, E., Júnior, L., Henriques, V., Terrinha, P., Arteaga, J., and Ramos, A.: Impacts of sand extraction and deposition on the ecosystem recovery rate in the southern coastal zone of Portugal, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19110, https://doi.org/10.5194/egusphere-egu2020-19110, 2020.
EGU2020-21248 | Displays | ITS5.7/CL2.14
Consumer as a source of marine litter: Eye-tracking research in Baltic Sea coastal areaIndre Razbadauskaite Venske, Inga Dailidiene, Remigijus Dailide, Vitalijus Kondrat, Egle Baltranaite, and Toma Dabulevičienė
This work is one of the research components of a Ph.D. thesis on geographical sustainable consumption, which analyzes how daily household waste drives a significant increase in marine litter. According to the "Marine Litter Socio-economic" study, typical items found on urban beaches are bottle caps, plastic bags, plastic food containers, wrappers, and plastic cutlery. In the Baltic Sea, 48% of marine litter is caused by household-related waste and 70% of marine litter is plastic (UN Environment, 2017; Helcom, 2018ad).
The main aim of this socio-economic research is to investigate the consumer as a source of marine litter in the Baltic Sea (pilot: Lithuania). It combines qualitative and quantitative methods. An eye-tracker assists in the analysis of gaze-point data collected from beach photos containing marine litter (with a focus on macro litter). Semi-structured interviews provide more in-depth insights into consumer behavior and habits.
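A common way to quantify such gaze-point data is to count fixations falling inside predefined areas of interest (AOIs) on each photo. The sketch below illustrates this; the AOI names, coordinates and gaze points are invented, and the study's actual analysis pipeline may differ.

```python
# Hypothetical sketch of gaze-point analysis: count gaze points that fall
# inside rectangular areas of interest (AOIs) on a beach photo, e.g.
# individual litter items. All coordinates below are invented.

def count_hits(gaze_points, aois):
    """aois: name -> (x_min, y_min, x_max, y_max); returns hit counts."""
    counts = {name: 0 for name in aois}
    for x, y in gaze_points:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                counts[name] += 1
    return counts

aois = {"bottle_cap": (100, 200, 140, 240), "plastic_bag": (300, 100, 380, 160)}
gaze = [(110, 210), (135, 238), (350, 120), (500, 500)]
print(count_hits(gaze, aois))  # -> {'bottle_cap': 2, 'plastic_bag': 1}
```

Comparing hit counts (or dwell times) across AOIs indicates which litter items attract visual attention, which can then be related to the interview findings.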
Thus, the ultimate goal of this research is to identify and introduce awareness-raising activities among consumers that minimize marine litter and prevent waste-generating behavior. In sum, this is interdisciplinary research combining three areas: physical geography, environmental protection, and marketing, with a special focus on consumer behavior.
How to cite: Razbadauskaite Venske, I., Dailidiene, I., Dailide, R., Kondrat, V., Baltranaite, E., and Dabulevičienė, T.: Consumer as a source of marine litter: Eye-tracking research in Baltic Sea coastal area, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21248, https://doi.org/10.5194/egusphere-egu2020-21248, 2020.
EGU2020-17713 | Displays | ITS5.7/CL2.14
A new wave hindcast for the North and Baltic Sea region using COSMO-REA6Nikolaus Groll
Wave hindcasts are still required, as improved knowledge of the climate variables representing the present marine climate is needed. Most long regional wave hindcasts are driven by numerically downscaled wind fields from global reanalyses. Whereas this approach gives a good representation of the regional wave climate in general, there are deficits in the characteristics of extreme events. Using a regional atmospheric reanalysis, which assimilates atmospheric observations into the numerical model, a better description of extreme events is expected. The regional atmospheric reanalysis COSMO-REA6 from the German Weather Service (DWD) has been shown to represent atmospheric extreme events better. For the new regional wave hindcast, covering the North Sea and the Baltic Sea, we use COSMO-REA6 to force the wave model WAM. It is shown that this new wave hindcast leads to an improved representation of extreme wave events compared to other regional wave hindcasts, and thus makes an important contribution to the understanding of the wave climate of extremes and to the design phase of offshore activities.
How to cite: Groll, N.: A new wave hindcast for the North and Baltic Sea region using COSMO-REA6, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17713, https://doi.org/10.5194/egusphere-egu2020-17713, 2020.
EGU2020-19862 | Displays | ITS5.7/CL2.14
Underwater light field changes in Pärnu Bay influenced by weather phenomena and captured by Sentinel-3Kristi Uudeberg, Mirjam Randla, Age Arikas, Tuuli Soomets, Kaire Toming, and Anu Reinart
Climate change is expected to continue in the 21st century, but the magnitude of change depends on future actions. In the Baltic Sea, and specifically in the Pärnu Bay region, this is predicted to mean warmer temperatures, less ice cover, more precipitation and a slight increase in average wind speed; furthermore, extreme climatic events such as heavy rain, strong winds and storms will become more intense and frequent. Coastal waters play a central role in the everyday lives of humans and nature, providing food, living space and recreational opportunities. Since Pärnu Bay is one of the most eutrophied areas in the Baltic Sea and provides a livelihood for more than 800 fishermen, regular monitoring is strongly recommended, but it is often unfeasible with traditional methods. The availability of free Sentinel satellite data with good spectral, spatial and temporal resolution has generated wide interest in using remote sensing to monitor the water quality of coastal waters, which affects the underwater light field and can even lead to changes in fish composition. However, these waters are optically complex, influenced independently by coloured dissolved organic matter, phytoplankton and suspended sediments. The remote sensing of optically complex waters is therefore more challenging, and standard remote sensing products often fail. In this study, we use Sentinel-3 data to investigate how weather phenomena such as strong winds and precipitation affect water quality parameters in Pärnu Bay, and we study the spatial and temporal scope of the changes in these parameters after such events. For that, we apply optical-water-type-based chlorophyll-a, suspended sediment and coloured dissolved organic matter algorithms to Sentinel-3 images and estimate changes in the underwater light field. Furthermore, we use in situ data to analyse the frequency and strength of weather events.
Finally, we review the composition of fish based on the literature and investigate the possible effects of changes in the underwater light field on fish composition.
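The optical-water-type-guided retrieval mentioned above can be sketched as: assign each pixel's reflectance spectrum to the closest reference water type, then apply that type's retrieval algorithm. The reference spectra, band set and type names below are invented for illustration, not the study's actual classes.

```python
import math

# Hypothetical sketch of optical water type (OWT) guided retrieval:
# assign a pixel spectrum to the reference type with the smallest
# spectral angle, then a per-type chl-a/TSM/CDOM algorithm would follow.
def closest_owt(spectrum, references):
    def angle(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return math.acos(max(-1.0, min(1.0, dot / (na * nb))))
    return min(references, key=lambda name: angle(spectrum, references[name]))

# Toy reference reflectance spectra at a few bands (invented values):
refs = {
    "clear":  [0.020, 0.015, 0.008, 0.002],
    "turbid": [0.010, 0.020, 0.030, 0.025],
}
pixel = [0.011, 0.019, 0.028, 0.022]
print(closest_owt(pixel, refs))  # -> "turbid"
```

Using the spectral angle makes the assignment sensitive to spectral shape rather than overall brightness, which is why such classification copes better with optically complex waters than a single global algorithm.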
How to cite: Uudeberg, K., Randla, M., Arikas, A., Soomets, T., Toming, K., and Reinart, A.: Underwater light field changes in Pärnu Bay influenced by weather phenomena and captured by Sentinel-3, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19862, https://doi.org/10.5194/egusphere-egu2020-19862, 2020.
EGU2020-19922 | Displays | ITS5.7/CL2.14
Bi-temporal Monitoring the Spatial Pattern and Variations of the Surface Urban Heat Island in three Chinese Coastal Megacities: A Comparative Study of Guangzhou, Hangzhou, and Shanghai.Fei Liu
The side-effects of booming urbanization on ecosystems and the climate system have been continuously intensifying. Coastal metropolises are located at the interface between land and ocean and are unavoidably influenced by the terrestrial environment, aquatic ecosystems, and urban development; the environmental health of coastal metropolises therefore deserves particular attention. In this study, targeting Guangzhou, Hangzhou, and Shanghai, we evaluated the spatiotemporal patterns and variations of the surface urban heat island (SUHI) in three coastal metropolises of China based on Landsat-derived land surface temperatures (LST) and land cover data. The results indicate that, within a nearly 15-year interval, the extents of hot spots in the three metropolises expanded significantly, and the spatial patterns of the SUHI transformed from monocentric to polycentric high-LST clusters, matching the trend of urban expansion. However, the three metropolises possess distinct features in terms of thermal layout and land cover/use composition: although the total area of SUHI hot spots in Shanghai has surged, the intensity of some hot spots has shrunk. In addition, the interactions and associations between the SUHI and urban development were investigated using spatial regression analysis. Urban composition and configuration considerably affected SUHI intensity; terrain morphology constrained the SUHI; prolific population growth had a continuing effect on SUHI formation; and the proportion of forests displayed a consistently critical influence on mitigating the adverse effects of the SUHI. Additionally, it is essential to appropriately consider the impacts of water in comparative analyses of different thermal environments; however, water might be treated as a time-invariant factor with a limited effect on the bi-temporal comparison for each metropolis.
These findings suggest that policy-makers and urban planners should balance and optimize land cover/use configurations while accommodating the growing population, and should maximize the preservation of greenbelts and green space while improving the utilization of urban infrastructure and construction.
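The kind of regression relating SUHI intensity to urban predictors can be sketched in its simplest form as an ordinary least squares fit of LST anomaly against one predictor. The study uses spatial regression, which additionally models spatial autocorrelation between neighbouring units; the predictor choice and all data values below are invented for illustration.

```python
# Minimal OLS sketch of the SUHI-vs-urban-predictor relationship.
# Real spatial regression (e.g. spatial lag/error models) adds terms for
# spatial autocorrelation; data here are toy values, not study results.

def ols_slope_intercept(x, y):
    """Fit y = slope * x + intercept by ordinary least squares."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Toy data: impervious surface fraction vs. LST anomaly (K) per district.
impervious = [0.1, 0.3, 0.5, 0.7, 0.9]
lst_anom   = [0.5, 1.1, 1.9, 2.4, 3.1]
slope, intercept = ols_slope_intercept(impervious, lst_anom)
print(round(slope, 3), round(intercept, 3))  # -> 3.25 0.175
```

A positive slope would quantify how strongly urban surface composition drives SUHI intensity; comparing coefficients between the two Landsat dates is what makes the analysis bi-temporal.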
How to cite: Liu, F.: Bi-temporal Monitoring the Spatial Pattern and Variations of the Surface Urban Heat Island in three Chinese Coastal Megacities: A Comparative Study of Guangzhou, Hangzhou, and Shanghai., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-19922, https://doi.org/10.5194/egusphere-egu2020-19922, 2020.
ITS5.9/EOS4.14 – Trans-disciplinary aspects of researching Arctic change: science communication, outreach and education, integration, monitoring, modelling and risk perception
EGU2020-12610 | Displays | ITS5.9/EOS4.14
Developing a collaborative permafrost research program: The Dempster - Inuvik to Tuktoyaktuk highway research corridor, Northwest Territories, CanadaAshley Rudy, Steve Kokelj, Alice Wilson, Tim Ensom, Peter Morse, and Charles Klengenberg
The Beaufort Delta region in Northwest Territories, Canada is one of the most rapidly warming areas on Earth. Permafrost thaw and climate change are major stressors on northern infrastructure, particularly in this region which hosts the highest density of Arctic communities and the longest road network constructed on ice-rich permafrost in Canada. The Dempster and Inuvik to Tuktoyaktuk Highways (ITH) comprise a 400-km corridor connecting the region with southern Canada. The corridor delivers a unique opportunity to develop a societally-relevant, northern-driven permafrost research network to encourage collaboration, and support pure and applied studies that engage stakeholders, encourage community participation, and acknowledge northern interests. Successful implementation requires leadership and institutional support from the Government of the Northwest Territories (GNWT) and Inuvialuit and Gwich’in Boards and landowners, and coordination between research organizations including NWT Geological Survey, Aurora Research Institute, Geological Survey of Canada, and universities to define key research priorities, human and financial resources to undertake studies, and protocols to manage data collection and reporting.
In 2017, a state-of-the-art ground temperature monitoring network was established along the Dempster-ITH corridor by the GNWT in collaboration with federal and academic partners. This network, in combination with the maintenance of the Dempster Highway and the recent design and construction of the ITH, has created a national legacy of permafrost geotechnical, terrain and geohazard information in this region. The objectives of this program are to integrate old and new data to synthesize physiographic, hydrological, thermal, and geotechnical conditions along the corridor, and to develop applied permafrost research projects that support planning and maintenance of this critical northern infrastructure. In this presentation, we highlight: 1) a collaborative research framework that builds northern capacity and involves northerners in the generation of knowledge and its application to increase community-based permafrost monitoring; 2) summaries of existing infrastructure datasets and the foundation they provide for research; and 3) new projects that address emerging climate-driven infrastructure stressors. As the effects of climate change on permafrost environments, infrastructure and communities continue to increase, the need for northern scientific capacity and applied research to support informed decision-making, climate change adaptation and risk management will become increasingly critical. The development of resilient researcher-stakeholder-community relationships is also necessary for the scientific and research initiatives to reach their potential.
How to cite: Rudy, A., Kokelj, S., Wilson, A., Ensom, T., Morse, P., and Klengenberg, C.: Developing a collaborative permafrost research program: The Dempster - Inuvik to Tuktoyaktuk highway research corridor, Northwest Territories, Canada, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12610, https://doi.org/10.5194/egusphere-egu2020-12610, 2020.
EGU2020-1840 | Displays | ITS5.9/EOS4.14 | Highlight
Permafrost Dynamics and Indigenous Land Use: Tracing Past and Current Landscape Conditions and Effects of Environmental Change in Sakha/Yakutia, RussiaMathias Ulrich and J. Otto Habeck
Arctic and Subarctic regions are currently experiencing more rapid warming than other parts of the Earth. This trend is of particular salience for the Republic of Sakha/Yakutia (East Siberia, Russia) – a vast region where both permafrost research and social science research on animal husbandry have been conducted intensively, but thus far separately. Here we present a new project that will combine these disconnected strands and use an interdisciplinary approach to examine landscape and land-use development under climatic change. Such an approach is topical because the effects of past and imminent permafrost degradation on indigenous livelihoods have hitherto been described in rather simplistic terms. The project is designed as a comparative study of two regions in Central and Northeast Sakha/Yakutia. Both areas are susceptible to permafrost degradation, but under divergent zonal and socio-economic conditions (taiga vs. tundra; cattle and horse vs. reindeer husbandry).
A key element of landscape dynamics in both regions is thermokarst, i.e. the thawing of ice-rich deposits leading to soil subsidence and lake formation. Thaw lakes mark an early phase of thermokarst formation; they can serve as indicators for changes in climate, permafrost and vegetation. On the one hand, thermokarst processes have taken place in earlier millennia, notably in the Pleistocene/Holocene transition and during the mid-Holocene climate optimum; in the long run, this has led to the formation of grass-rich depressions (known as alas), creating the preconditions for cattle farming in Central Sakha/Yakutia which emerged at least 500 years ago. On the other hand, thermokarst processes occur at present in connection with global warming; the effects of the latter are likely to produce unprecedented rapid change, with very grave consequences for local land users.
In the analysis of landscape development and land use, we distinguish between two periods: before and after the start of pastoralism and farming. We test the hypothesis that landscape and land-use changes occurred at different scales and speeds in the two zonal settings (Central vs Northeastern Sakha/Yakutia). Furthermore, we postulate that existing forms of land use are going to influence landscape development in different ways: They (i) correlate with, (ii) exacerbate or (iii) neutralize the effects of climate change (owing to different feedback mechanisms). Finally, taking into account the most important demographic, economic and socio-cultural influences, the project will contribute to formulating parameters for modelling the future risks that permafrost degradation exerts on rural communities.
How to cite: Ulrich, M. and Habeck, J. O.: Permafrost Dynamics and Indigenous Land Use: Tracing Past and Current Landscape Conditions and Effects of Environmental Change in Sakha/Yakutia, Russia, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1840, https://doi.org/10.5194/egusphere-egu2020-1840, 2020.
EGU2020-1184 | Displays | ITS5.9/EOS4.14 | Highlight
Spatiotemporal influence of permafrost thaw on anthrax diffusionElisa Stella, Lorenzo Mari, Carlo Barbante, Jacopo Gabrieli, and Enrico Bertuzzo
The 2016 outbreak of anthrax affecting reindeer herds in Siberia has been associated with old infected carcasses released from thawing permafrost, underscoring the emerging character of the disease in the Arctic region under climate change. Anthrax occurs in nature as a global zoonotic and epizootic disease caused by the spore-forming bacterium Bacillus anthracis. It principally affects herbivores and causes high animal mortality. Transmission occurs mainly via environmental contamination through spores, which can remain viable in permafrost for many decades.
We propose and analyze a novel epidemiological model for anthrax transmission specifically tailored to the Arctic region. It conceptualizes the transmission of disease between susceptible and infected animals in the presence of environmental contamination, accounting for herding practices (e.g. seasonal grazing) and the seasonal environmental forcing caused by thawing permafrost. We performed stability analyses, applied Floquet theory for periodically forced systems, and then ran the model against the 17-year record of permafrost thaw depth available for the Lena River Delta (northern Siberia). To map potential anthrax incidence, and hence possible hazardous areas in the Arctic, we used the Maximum Entropy (Maxent) approach with environmental variables, in particular accounting for current and expected permafrost thawing rates.
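The abstract does not give the model equations. As a rough illustration of the kind of seasonally forced compartmental model described — susceptible and infected animals coupled to an environmental spore reservoir replenished by permafrost thaw — one might sketch something like the following; all parameter values and functional forms are illustrative assumptions, not taken from the study:

```python
import math

def simulate_anthrax(years=10, dt=1/365):
    """Toy susceptible-infected-spore (S, I, B) model with seasonal
    forcing from permafrost thaw.  All parameter values are illustrative
    assumptions, not values from the study."""
    S, I, B = 1000.0, 0.0, 5.0   # susceptible animals, infected, spore load
    mu, beta = 0.1, 0.02         # herd renewal; baseline exposure rate
    gamma = 30.0                 # disease-induced mortality (rapidly fatal)
    shed, decay = 50.0, 0.5      # spores shed per death; spore inactivation
    out = []
    for step in range(round(years / dt)):
        t = step * dt
        # Seasonal forcing: thaw depth peaks in summer, releasing old
        # spores from permafrost and concentrating grazing on thawed ground.
        thaw = max(0.0, math.sin(2 * math.pi * (t - 0.25)))
        release = 2.0 * thaw             # spore release from thawing soil
        contact = beta * (1 + thaw)      # grazing overlaps thawed areas
        infection = contact * S * B / (B + 100.0)
        dS = mu * (1000.0 - S) - infection
        dI = infection - gamma * I
        dB = shed * gamma * I + release - decay * B
        S, I, B = S + dS * dt, I + dI * dt, B + dB * dt
        out.append((t, S, I, B))
    return out
```

Stability and Floquet analysis of such a periodically forced system would then ask whether the disease-free orbit is stable over one forcing period, which is where the warm-year sensitivity reported below enters.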
Results show how the temporal variability of grazing and thawing may favor sustained anthrax transmission. In particular, unusually warm years are associated with an increased risk of anthrax incidence. We show that this risk could be mitigated by specific precautions in herding practices, for example by anticipating or postponing seasonal grazing. Finally, a spatial map of the potential Arctic areas at risk is presented, providing a tool for local authorities in view of possible targeted prevention measures.
How to cite: Stella, E., Mari, L., Barbante, C., Gabrieli, J., and Bertuzzo, E.: Spatiotemporal influence of permafrost thaw on anthrax diffusion, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-1184, https://doi.org/10.5194/egusphere-egu2020-1184, 2020.
EGU2020-21723 | Displays | ITS5.9/EOS4.14
Some recent efforts for the education in the Russian Arctic: the examples from institutes to individualsAnastasiia Tarasenko, Alexandra Mushta, Arina Kosareva, Veleta Yarygina, and Daria Frolova
In this talk, we present several examples of existing interactions between the scientific community and the general public, mainly in educational form. The talk is divided into two parts: the first draws on the experience of the Arctic and Antarctic Research Institute (AARI), and the second on that of Saint Petersburg State University (SPbSU).
In 2018, AARI began working with the public through several social networks — Instagram, Facebook, and VKontakte — reaching an audience of more than 10,000 accounts. Daily posts with constant feedback are written together with polar scientists: we make wide use of photographs and videos from expeditions to show live the current state of the Arctic and how it is changing, the work and instruments of polar scientists, and basic knowledge about the region. We regularly publish interviews with scientists and maintain a special hashtag, #childrenofpolarscientists. Over these two years, we created a competition for undergraduate and secondary-school students, “66° 63”, whose winners can visit the Svalbard archipelago and realize their scientific or artistic projects. These projects are officially supported by a dedicated media department of the AARI.
The scientists also continue promoting their work individually: giving lectures, participating in “Scientific Slams”, and writing blogs during expeditions. Publishing classical albums and books after an expedition, such as the recent Transarktika-2019, remains an important outcome of scientific journalism.
To illustrate the effect of local educational programs, we present the efforts of an ecological team from Saint Petersburg State University working in the village of Ust’-Yany in the Arkhangelsk region. The program ran from 2013 to 2016 with the main purpose of informing local children about what is happening to their region and describing the opportunities to work in a new environment, develop their forests, and create new jobs. The main audience was secondary-school children. Strong support from the local administration helped to realize multiple short-term visits, two conferences, and school projects. This personal initiative, organized by students and professors of SPbSU, received very good feedback from both the children and the administration.
How to cite: Tarasenko, A., Mushta, A., Kosareva, A., Yarygina, V., and Frolova, D.: Some recent efforts for the education in the Russian Arctic: the examples from institutes to individuals, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-21723, https://doi.org/10.5194/egusphere-egu2020-21723, 2020.
EGU2020-7154 | Displays | ITS5.9/EOS4.14
The Sea Ice Tracking System (SITU): A Community Tool for the Arctic and AntarcticBruno Tremblay, Stephanie Pfirman, Garrett Campbell, Robert Newton, and Walt Meier
The Sea Ice Tracking System (SITU), formerly known as the IceTracker or Lagrangian Ice Tracking System, has been expanded to include new functions facilitating a wide range of new applications (http://icemotion.labs.nsidc.org/SITU/). Ice motion vectors are calculated from an optimal interpolation of satellite-derived, free-drift, and buoy drift estimates (Polar Pathfinder dataset, version 4, https://nsidc.org/data/nsidc-0116; International Arctic Buoy Program, http://iabp.apl.washington.edu/; NCEP/NCAR reanalysis, https://www.esrl.noaa.gov/). SITU now calculates forward and backward trajectories of Antarctic as well as Arctic sea ice from 1979 to 2018 and incorporates basin-wide contextual information along the tracks, including time series of bathymetry, ice concentration, ice age, ice motion, air temperature, pressure, and wind speed. A new animated background option allows users to visualize these changing basin-wide environmental conditions as the tracking progresses. SITU can be used by researchers, educators, local and indigenous communities, policy and planning professionals, and industries. For instance, geologists can use SITU to determine the provenance of sediment transported by sea ice and deposited at an ocean core site; biologists can identify the source regions of biomass transported by sea ice and seeding algal blooms in a given sea, or overlay bear and bird tracks on the ice conditions or ice types animated in the background; coastal communities can backtrack ice to reveal the age, origin, and other factors that influence habitats of ice-associated species; people planning future expeditions can review recent ice conditions along potential cruise tracks; historians can compare current air temperatures, wind conditions, and ice concentration with those of past expeditions; and students can learn about sea ice motion in the Arctic or compare recent ice drift (Tara or MOSAiC) with that of Nansen's epic expedition.
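The core of such a backtracking tool is integrating a parcel position backward through a gridded daily motion field. A minimal sketch of the idea is below; the grid layout, units, and nearest-neighbour sampling are illustrative assumptions (the Polar Pathfinder product uses its own EASE grid, and SITU's optimal interpolation of satellite, buoy, and free-drift estimates is not reproduced here):

```python
def backtrack(u, v, x0, y0, days, dt_days=1.0, dx_km=25.0):
    """Trace an ice parcel backward through a gridded motion field.
    u[t][j][i] and v[t][j][i] are daily drift components in km/day on a
    regular grid with dx_km spacing; grid layout and units are
    illustrative assumptions, not the actual SITU/Pathfinder format.
    Nearest-neighbour sampling is used for brevity."""
    x, y = float(x0), float(y0)        # position in km
    track = [(x, y)]
    for t in range(days - 1, -1, -1):  # step backward in time
        i = min(max(int(round(x / dx_km)), 0), len(u[t][0]) - 1)
        j = min(max(int(round(y / dx_km)), 0), len(u[t]) - 1)
        x -= u[t][j][i] * dt_days      # reverse the drift vector
        y -= v[t][j][i] * dt_days
        track.append((x, y))
    return track
```

Running the same loop forward in time, with the drift vectors added rather than subtracted, gives the forward-trajectory mode; sampling contextual fields (ice age, concentration, winds) at each `(x, y)` yields the along-track time series the abstract describes.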
A new Eulerian option allows users to see changing conditions at a single point over the full satellite record (1978 to present). This Eulerian depiction reveals variability as well as trends, and can provide context for data retrieved from a mooring, sediment trap, or sediment core. Publicly hosted on the NSIDC Labs webpage, the data can be downloaded graphically or in spreadsheet format for deeper analysis.
How to cite: Tremblay, B., Pfirman, S., Campbell, G., Newton, R., and Meier, W.: The Sea Ice Tracking System (SITU): A Community Tool for the Arctic and Antarctic, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-7154, https://doi.org/10.5194/egusphere-egu2020-7154, 2020.
EGU2020-12661 * | Displays | ITS5.9/EOS4.14 | Highlight
Voices of the Sea Ice: engaging an Arctic community to communicate impacts of climate changeDavid Lipson, Kim Reasor, and Kååre Sikuaq Erickson
The predominantly Inupiat people of Utqiaġvik, Alaska are among those who will be most impacted by climate change and the loss of Arctic sea ice in the near future. Subsistence hunting of marine mammals associated with sea ice is central to the Inupiat way of life. Furthermore, their coastal homes and infrastructure are increasingly subject to damage from increased wave action on the ice-free Beaufort and Chukchi Seas. While the people of this region are among the most directly vulnerable to climate change, the subject is not often discussed in the elementary school curriculum. Meanwhile, in many other parts of the world, the impacts of climate change are viewed as abstract and remote. We worked with fifth-grade children in Utqiaġvik both to educate them and to engage them in helping us communicate to the rest of the world, in an emotionally resonant way, the direct impacts of climate change on families in this Arctic region.
The team consisted of a scientist (Lipson), an artist (Reasor), and an outreach specialist (Erickson) of Inupiat heritage from a village in Alaska. We worked with four 5th-grade classes of about 25 students each at Fred Ipalook Elementary in Utqiaġvik, AK. The scientist gave a short lecture about sea ice and climate change in the Arctic, with emphasis on local impacts to hunting and infrastructure (with interjections from the local outreach specialist). We then showed the students a large poster of historical and projected sea ice decline, and asked them to help us fill in the white space beneath the lines. The artist led the children in making small art pieces representing things that are important to their lives in Utqiaġvik (they were encouraged to paint animals, but were free to do whatever they wanted). We returned to the class later that week and had each student briefly introduce themselves and their painting, and place it on the large graph of sea ice decline, which included the dire predictions of the RCP8.5 scenario. At the end we added the more hopeful RCP2.6 scenario to finish on a positive note; the artist painted in the more hopeful green line by hand.
The result was a poster showing historical and projected Arctic sea ice cover, with 100 beautiful paintings by children of things that are dear to them about their home being squeezed into a smaller region as the sea ice cover diminishes. We scanned all the artwork to make a digital version of the poster, and left the original with the school. These materials are being converted into an interactive webpage where viewers can click on individual paintings for detail and hear selected recordings of the children's statements about their artwork. This project can serve as a nucleus for communicating to other classes and adults about the real impacts of climate change in people's lives.
How to cite: Lipson, D., Reasor, K., and Erickson, K. S.: Voices of the Sea Ice: engaging an Arctic community to communicate impacts of climate change, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12661, https://doi.org/10.5194/egusphere-egu2020-12661, 2020.
EGU2020-12248 | Displays | ITS5.9/EOS4.14 | Highlight
Community-based observations help interface Indigenous and local knowledge, scientific research, and education in response to rapid Arctic coastal changeHajo Eicken, Finn Danielsen, Matthew Druckenmiller, Maryann Fidel, Donna Hauser, Lisbeth Iversen, Noor Johnson, Joshua Jones, Mette Kaufman, Olivia Lee, Peter Pulsifer, and Josephine-Mary Sam
Arctic coastal sea-ice environments are undergoing some of the most rapid changes anywhere in the Arctic, with implications for coastal communities’ food security and infrastructure, marine ecosystems, and permafrost. We argue that responses to such rapid change are most effective when informed by Indigenous and local knowledge and local observations to provide understanding of relevant processes, their impacts, and potential adaptation options. Community-based observations in particular can help create an interface across which different forms of knowledge, scientific research, and formal and informal education can co-develop meaningful responses. Through a broader literature review and a series of workshops, we have identified principles that can aid in this process, which include matching observing program and community priorities, creating sufficient organizational support structures, and ensuring sustained community members’ commitment. Drawing on a set of interconnected examples from Arctic Alaska focused on changing sea-ice environments and their impacts on coastal communities, we illustrate how these approaches can be implemented to provide knowledge sharing resources and tools. Specifically, in the context of the Alaska Arctic Observatory and Knowledge Hub (A-OK), a group of Iñupiat ice and coastal marine ecosystem experts is working with sea-ice geophysicists, marine biologists, and others to track changes in coastal environments as well as the services that the ice cover provides to coastal communities. The co-development of an observing framework and a web-based searchable database of observations has provided an interface for exchange and an education resource. 
An annual survey of hunting trails across the shorefast ice cover in the community of Utqiaġvik serves to further illustrate how different, response-focused activities such as the tracking of ice hazards – increasingly a concern with loss of ice stability and shortening of the ice season – can be embedded within a community-based monitoring framework.
How to cite: Eicken, H., Danielsen, F., Druckenmiller, M., Fidel, M., Hauser, D., Iversen, L., Johnson, N., Jones, J., Kaufman, M., Lee, O., Pulsifer, P., and Sam, J.-M.: Community-based observations help interface Indigenous and local knowledge, scientific research, and education in response to rapid Arctic coastal change, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12248, https://doi.org/10.5194/egusphere-egu2020-12248, 2020.
EGU2020-11935 | Displays | ITS5.9/EOS4.14
Strengthening connections across disciplines and borders through an international permafrost coastal systems network (PerCS-Net)Benjamin Jones and the Permafrost Coastal Systems Network (PerCS-Net)
Changes in the Arctic system have increased the vulnerability of permafrost coasts to erosion and altered coastal morphologies, ecosystems, biogeochemical cycling, infrastructure, cultural and heritage sites, community well-being, and human subsistence lifestyles. Better understanding the pace and nature of rapid changes occurring along permafrost coastlines is urgent, since a high proportion of Arctic residents live on or near coastlines, and many derive their livelihood from terrestrial and nearshore marine resources.
The US National Science Foundation's AccelNet and Arctic System Sciences programs recently awarded a collaborative grant funding the Permafrost Coastal Systems Network (PerCS-Net). PerCS-Net focuses on leveraging resources from existing national and international networks that share a common vision of better understanding permafrost coastal system dynamics and emerging transdisciplinary science, engineering, and societal issues, in order to amplify the broader impacts of each individual network. PerCS-Net strengthens linkages between existing networks based in Germany, Russia, Norway, Denmark, Poland, and Canada and the activities of several active US NSF-funded networks, as well as several local, state, and federally funded US-based networks.
PerCS-Net will benefit the US and international research communities by (1) developing internationally recognized protocols for quantifying the multitude of changes and impacts occurring in Arctic coastal permafrost systems, (2) sustaining long-term observations from representative coastal key sites, (3) unifying annual and decadal-scale observations of circum-arctic permafrost-influenced coasts, (4) refining a circum-arctic coastal mapping classification system and web-based delivery of geospatial information for management planning purposes and readily accessible information exchange for vulnerability assessments, (5) engaging local communities and observers to capture impacts on subsistence and traditional livelihoods, and (6) promoting synergy across networks to foster the next generation of students, postdoctoral scholars, and early-career researchers faced with the known and unknown challenges of the future Arctic System.
Ultimately, PerCS-Net will develop a circumpolar alliance for Arctic coastal community information exchange between stake-, rights-, and knowledge-holders, scientists, and land managers. There is increasingly diverse interest in permafrost coastal system issues, yet currently no unified source of information on the past, present, and potential future state of permafrost coastal systems that provides the level of detail needed to make decisions at scales relevant for indigenous communities across the Arctic. Such new engagement will inform intergovernmental agencies and international research and outreach programs in making science-based decisions and policies to adapt to changing permafrost coastal system dynamics. PerCS-Net will build a network of networks to assess the risks posed by permafrost coastal system changes to local and global economies and well-being, and facilitate knowledge transfer that will lead to circum-arctic adaptation strategies.
How to cite: Jones, B. and the Permafrost Coastal Systems Network (PerCS-Net): Strengthening connections across disciplines and borders through an international permafrost coastal systems network (PerCS-Net), EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11935, https://doi.org/10.5194/egusphere-egu2020-11935, 2020.
Changes in the Arctic system have increased the vulnerability of permafrost coasts to erosion and altered coastal morphologies, ecosystems, biogeochemical cycling, infrastructure, cultural and heritage sites, community well-being, and human subsistence lifestyles. Better understanding the pace and nature of rapid changes occurring along permafrost coastlines is urgent, since a high proportion of Arctic residents live on or near coastlines, and many derive their livelihood from terrestrial and nearshore marine resources.
The US National Science Foundation’s AccelNet and Arctic System Sciences Programs recently awarded a collaborative grant funding the Permafrost Coastal Systems Network (PerCS-Net). PerCS-Net focuses on leveraging resources from existing national and international networks that have a common vision of better understanding permafrost coastal system dynamics and emerging transdisciplinary science, engineering, and societal issues, in order to amplify the broader impacts of each individual network. PerCS-Net strengthens linkages between existing networks based in Germany, Russia, Norway, Denmark, Poland, and Canada with the activities of several active US NSF-funded networks as well as several local, state, and federally funded US-based networks.
PerCS-Net will benefit the US and international research communities by (1) developing internationally recognized protocols for quantifying the multitude of changes and impacts occurring in Arctic coastal permafrost systems, (2) sustaining long-term observations from representative coastal key sites, (3) unifying annual and decadal-scale observations of circum-arctic permafrost-influenced coasts, (4) refining a circum-arctic coastal mapping classification system and web-based delivery of geospatial information for management planning purposes and readily accessible information exchange for vulnerability assessments, (5) engaging local communities and observers to capture impacts on subsistence and traditional livelihoods, and (6) promoting synergy across networks to foster the next generation of students, postdoctoral scholars, and early-career researchers faced with the known and unknown challenges of the future Arctic System.
Ultimately, PerCS-Net will develop a circumpolar alliance for Arctic coastal community information exchange between stake-, rights- and knowledge holders, scientists, and land managers. There is increasingly diverse interest in permafrost coastal system issues, and currently no unified source of information on the past, present, and potential future state of permafrost coastal systems that provides the level of detail needed to make decisions at scales relevant for indigenous communities across the Arctic. Such new engagement will inform intergovernmental agencies and international research and outreach programs in making science-based decisions and policies to adapt to changing permafrost coastal system dynamics. PerCS-Net will build a network of networks to assess risks posed by permafrost coastal system changes to local and global economies and well-being and facilitate knowledge transfer that will lead to circum-arctic adaptation strategies.
How to cite: Jones, B. and the Permafrost Coastal Systems Network (PerCS-Net): Strengthening connections across disciplines and borders through an international permafrost coastal systems network (PerCS-Net), EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11935, https://doi.org/10.5194/egusphere-egu2020-11935, 2020.
EGU2020-8978 | Displays | ITS5.9/EOS4.14
Large-scale Mapping of Arctic Coastal Infrastructure using Copernicus Sentinel Data and Machine Learning and Deep Learning MethodsGeorg Pointner, Annett Bartsch, and Thomas Ingeman-Nielsen
Climate-change-induced warming of the Arctic is leading to accelerated thawing of permafrost, which can cause ground subsidence. As a consequence, buildings and other infrastructure of local settlements in many Arctic regions are at risk of destabilization and collapse. The increased exploitation of Arctic natural resources has led to the establishment of large industrial infrastructure that is likewise at risk. Most of the human activity in the Arctic is located near permafrost coasts. The thawing of coastal permafrost additionally leads to coastal erosion, which makes Arctic coastal settlements even more vulnerable.
The European Union (EU) Horizon2020 project “Nunataryuk” aims to assess the impacts of thawing land, coast and subsea permafrost on the climate and on local communities in the Arctic. One task of the project is to determine the impacts of permafrost thaw on coastal Arctic infrastructures and to provide appropriate adaptation and mitigation strategies. For that purpose, a circumpolar account of infrastructure is needed.
During recent years, the two polar-orbiting Sentinel-2 satellites of the Copernicus program of the EU have been acquiring multi-spectral imagery at high spatial and temporal resolution. Sentinel-2 data is a common choice for land cover mapping. Most land cover products, however, include only a single class for built-up areas. The fusion of optical and Synthetic Aperture Radar (SAR) data for land cover mapping has gained increasing attention in recent years. By combining Sentinel-2 and Sentinel-1 SAR data, the classification of multiple types of infrastructure can be anticipated. Another emerging trend is the application of machine learning and deep learning methods for land cover mapping.
We present an automated workflow for downloading, processing and classifying Sentinel-2 and Sentinel-1 data in order to map coastal infrastructure with circum-Arctic extent, developed on a highly performant virtual machine (VM) provided by the Copernicus Research and User Support (RUS). We further assess the first classification results mapped with two different methods, one being a pixel-based classification using a Gradient Boosting Machine and the other being a windowed semantic segmentation approach using the deep-learning framework Keras.
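The pixel-based approach can be sketched as follows. This is an illustrative example only, not the authors' workflow: the band names, class labels, and synthetic data are invented, and scikit-learn's `GradientBoostingClassifier` stands in for whatever GBM implementation was actually used.

```python
# Illustrative sketch of pixel-based classification of a stacked
# Sentinel-2 + Sentinel-1 feature array with a Gradient Boosting Machine.
# All data here is synthetic; classes and band counts are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Stand-in for a co-registered image stack flattened to pixels:
# e.g. 10 Sentinel-2 bands + 2 Sentinel-1 polarisations per pixel.
n_pixels, n_features = 500, 12
X = rng.normal(size=(n_pixels, n_features))
# Hypothetical classes: 0 = tundra, 1 = water, 2 = building, 3 = road.
y = rng.integers(0, 4, size=n_pixels)
# Shift features by class so the synthetic problem is learnable.
X += y[:, None] * 0.8

clf = GradientBoostingClassifier(n_estimators=50, random_state=0)
clf.fit(X[:400], y[:400])          # train on labelled pixels
pred = clf.predict(X[400:])        # one class label per held-out pixel
```

In a real run, `X` would come from reading the co-registered rasters and reshaping each scene to `(n_pixels, n_bands)`, and `pred` would be reshaped back to the image grid to form the infrastructure map.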
How to cite: Pointner, G., Bartsch, A., and Ingeman-Nielsen, T.: Large-scale Mapping of Arctic Coastal Infrastructure using Copernicus Sentinel Data and Machine Learning and Deep Learning Methods, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-8978, https://doi.org/10.5194/egusphere-egu2020-8978, 2020.
EGU2020-11919 | Displays | ITS5.9/EOS4.14
Mapping lake drainage and drained lake basins around Point Lay, Alaska using multi-source remote sensing dataHelena Bergstedt, Benjamin Jones, Donald Walker, Louise Farquharson, Amy Breen, and Kenneth Hinkel
The North Slope of Alaska is a permafrost-affected landscape dominated by lakes and drained lake basins of different sizes, depths and ages. Local communities across the North Slope region rely on lakes as a freshwater source and as locations for subsistence fishing, while industry relies on lakes as a source of water for winter transportation. Lake drainage events are often disruptive to both communities and industry that rely on being in close proximity to surface water sources in a region underlain by continuous permafrost. Drained lake basins of different ages can provide information on the past effects of climate change in the region. Studying past drainage events gives insight about the causes and mechanisms of these complex systems and benefits our understanding of lake evolution on the Arctic Coastal Plain in Alaska and the circumpolar Arctic as a whole.
Lakes and drained lake basins can be identified using high to medium resolution multispectral imagery from a range of satellite-based sensors. We explore the history of lake drainage in the region around Point Lay, a community located on the northern Chukchi Coast of Alaska, using a multi-source remote sensing approach. We study the evolution of lake basins before and after drainage events, their transformation from fishing grounds and water sources to grazing grounds and the geomorphological changes in the surrounding permafrost-dominated landscapes associated with these transitions.
We build a dense and long time series of satellite imagery of past lake drainage events by including a multitude of remote sensing acquisitions from different sources in our analysis. Incorporating imagery from different sensors with different temporal and spatial resolutions allows us to assess past drainage events and the current geomorphological states of lakes and drained lake basins at different temporal and spatial scales. Point Lay is known to be an area where drainage events occur frequently and are of high relevance to the community. In August of 2016, the village drinking water source drained during a period of intense rainfall, causing the village to seek an alternative freshwater supply. Our results from the analysis of the remotely sensed imagery were shared directly with the community as part of a public seminar series in the spring of 2020. We hope that results from our study near Point Lay, Alaska can contribute towards the selection of a new freshwater source lake for the village.
How to cite: Bergstedt, H., Jones, B., Walker, D., Farquharson, L., Breen, A., and Hinkel, K.: Mapping lake drainage and drained lake basins around Point Lay, Alaska using multi-source remote sensing data, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11919, https://doi.org/10.5194/egusphere-egu2020-11919, 2020.
EGU2020-6824 | Displays | ITS5.9/EOS4.14 | Highlight
Negative changes in permafrost due to waste storageValery Grebenets, Fedor Iurov, and Vasily Tolmanov
Keywords: permafrost, waste, hazardous cryogenic processes
The problem of waste storage is particularly acute in the Arctic. This is due to the vulnerability of northern ecosystems; the existence of permafrost, which is especially vulnerable to anthropogenic impact; the water-resistant properties of frozen ground; and the effect of destructive cryogenic processes. A further cause for concern is the trend in air and frozen-soil temperatures reported for the northern regions: pollutants stored in a relatively stable frozen state can be released into the environment as a result of thawing. This is especially true for industrial regions, where billions of cubic meters of waste from the mining and beneficiation of ores and coal, from timber processing, mine water spills, drilling fluids, etc. are stored in a frozen state.
Field investigations were carried out in a number of settlements in the Russian cryolithozone (Norilsk, Vorkuta, Igarka, settlements in the lower Ob, national villages of Taimyr, etc.). The observations involved remote sensing methods and included estimation of the littered area and the types of waste. In many cases sampling for chemical analyses, thermometry, and mapping of hazardous processes were performed.
The impact of stored wastes on permafrost was divided into three main types: a) mechanical (changing the relief and the flow paths of surface and ground waters); b) physical and chemical (pollution by the waste itself and by its decomposition products); c) thermal (heating of frozen soils by high-temperature waste or heat generation during various chemical reactions).
During the research, six main types of waste storage were identified, each of which had a destructive effect on permafrost soils and northern ecosystems:
1) dumps of municipal solid waste (inherent in all settlements);
2) storages of industrial waste, tailing storage facilities in the industrial centers of the north;
3) abandoned and cluttered territories;
4) landfills of timber processing waste in the centers of the timber industry;
5) rock dumps in open-cast mining sites, which in the cold climate can transform into rock glaciers;
6) storage areas for polluted snow transferred from built-up areas.
Particular attention was paid to the accumulation of chemical pollutants in industrial centers (with the Norilsk industrial region as an example). Under permafrost conditions this problem is exacerbated by the low self-purification capacity of northern biogeocenoses; the slowdown of oxidation and some other chemical reactions in cold climates; and the drainage and discharge of groundwater from the seasonally thawed layer and from intra- and sub-permafrost taliks into water bodies.
The use of imperfect technologies for extracting and processing raw materials, the legacy of past practices in which the environmental situation was neglected, the absence of special standards for the storage of waste and industrial by-products, and the underdevelopment of waste disposal methods suited to severe climatic conditions have led to the pollution of vast territories and the destruction of many ecosystems.
This work was supported by the RFBR grant 18-05-60080 “Dangerous nival-glacial and cryogenic processes and their impact on infrastructure in the Arctic”.
How to cite: Grebenets, V., Iurov, F., and Tolmanov, V.: Negative changes in permafrost due to waste storage, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6824, https://doi.org/10.5194/egusphere-egu2020-6824, 2020.
EGU2020-6695 | Displays | ITS5.9/EOS4.14
Bearing capacity of frozen soils for foundations of objects in the North of Western SiberiaFedor Iurov and Valery Grebenets
Keywords: permafrost, forecast, bearing capacity, foundation
The North of Western Siberia is a very promising region for industrial development. It is rich in oil and gas deposits, large settlements are located here, and there is an extensive system of transport infrastructure (gas and oil pipelines, roads and railways). The territory has highly differentiated permafrost-geological conditions across various types of landscapes. The development of new production sites and the construction and operation of infrastructure objects often activate dangerous cryogenic processes.
Trends of increasing air temperature result in a deeper active layer, which reduces the frozen contact area of pile foundations, as well as in higher soil temperatures, which reduce the adfreeze forces. The problem is compounded by anthropogenic impact, which intensifies the negative changes in permafrost.
Quantitative estimation of changes in the bearing capacity of frozen pile foundations in the North of Western Siberia was carried out up to 2050 for various types of soils (sand, clay soils, peat) with trends in increasing temperatures of frozen soils and trends in increasing thickness of the active layer taken into account. Detailed calculations were carried out for the route of the “Vankor-Purpe” oil pipeline.
The calculations showed that, if the current rate of climate warming is maintained, there will be a significant deterioration of the engineering-geocryological situation by 2050. The largest negative changes will take place in the southern part of the permafrost zone of Western Siberia (in the Tazovsky, Novourengoysky and Nadymsky districts), where the decrease in bearing capacity will exceed 50%. In the more northern regions (on the territory of Yamal), the predicted changes in the bearing capacity of frozen pile foundations by 2050 will not be as critical (no more than 20%). However, an increase in the thickness of the active layer can activate the thermokarst process, due to the proximity of thick stratal ice to the surface, as well as other destructive cryogenic processes.
In the study region, under the influence of rising soil temperatures and increasing depth of seasonal thawing, loamy soils are the most vulnerable to climatic changes; according to the calculations, they show the maximum decrease in the bearing capacity of frozen piles (up to 10% over 10 years). Sandy soils are more stable; in such areas the bearing capacity decreases at a lower rate (up to 5–7% over 10 years). Areas with a moss-peat layer at the surface are less susceptible to changes in bearing capacity; however, with industrial methods of foundation construction, this layer is destroyed where the piles are installed.
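The mechanism behind these numbers can be illustrated with a deliberately simplified calculation. This sketch is not the authors' model: all parameter values are invented, and a frozen pile's capacity is reduced here to adfreeze resistance over the frozen embedment length plus end bearing, so that a deeper active layer shortens the frozen length while warmer ground lowers the adfreeze strength.

```python
# Simplified illustration (assumed parameter values, not the authors'
# calculation): bearing capacity of a frozen pile as adfreeze resistance
# over the frozen embedment plus end bearing.
def pile_capacity_kn(pile_len_m, perimeter_m, active_layer_m,
                     adfreeze_kpa, end_bearing_kpa, tip_area_m2):
    # Only the portion of the pile below the active layer stays frozen in.
    frozen_len = max(pile_len_m - active_layer_m, 0.0)
    return adfreeze_kpa * perimeter_m * frozen_len + end_bearing_kpa * tip_area_m2

# Hypothetical 10 m pile with a 0.35 m square cross-section.
now = pile_capacity_kn(10, 4 * 0.35, active_layer_m=1.5, adfreeze_kpa=100,
                       end_bearing_kpa=850, tip_area_m2=0.35**2)
# Warmer scenario: active layer deepens to 2.5 m, adfreeze strength drops 30%.
later = pile_capacity_kn(10, 4 * 0.35, active_layer_m=2.5, adfreeze_kpa=70,
                         end_bearing_kpa=850, tip_area_m2=0.35**2)
change_pct = 100 * (later - now) / now
```

Under these invented inputs the capacity drops by roughly 35%, showing how modest warming-driven changes in two parameters compound into the large decreases reported for the southern permafrost zone.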
This work was supported by the RFBR grant 18-05-60080 “Dangerous nival-glacial and cryogenic processes and their impact on infrastructure in the Arctic”.
How to cite: Iurov, F. and Grebenets, V.: Bearing capacity of frozen soils for foundations of objects in the North of Western Siberia, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-6695, https://doi.org/10.5194/egusphere-egu2020-6695, 2020.
EGU2020-10350 | Displays | ITS5.9/EOS4.14 | Highlight
Assessment of Dangerous Permafrost Processes in Urban Settlements of the Russian ArcticDmitry Streletskiy, Valery Grebenets, and Nadezhda Zamyatina
The Russian Arctic is characterized by developed infrastructure and a high percentage of urban population living on permafrost. Settlements on permafrost represent hot spots of permafrost transformation, as rapidly changing climatic conditions are exacerbated by various types of human activities. To evaluate the exposure and risks of settlements to dangerous permafrost-related processes, we selected several criteria, including geographic extent, duration, probability of occurrence, and total risk of damages associated with each permafrost process in 37 settlements located in various parts of the Russian Arctic. The following six types of potentially dangerous permafrost processes were considered: a) thermokarst, b) thermal erosion and thermal abrasion, c) frost heave, d) frost cracking, e) formation of icings, f) human-induced slope processes on permafrost. While the risk from any particular process was rather location-specific, the integral assessment across all selected categories allowed us to classify the overall exposure of settlements to permafrost processes. Results show that the cities of Anadyr, Nadym and Kharp have rather small risk exposure, while the cities of Igarka and Vorkuta have relatively high exposure. Bilibino and Norilsk were among the cities with the highest overall exposure and potential risk associated with the permafrost-related processes considered in this study. The research is supported by Russian Foundation for Basic Research project 18-05-600888 “Urban Arctic resilience in the context of climate change and socio-economic transformations”.
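The integral assessment described above can be sketched in a few lines. This is a hypothetical illustration of the scoring idea only: the settlement names, score values, and the unweighted summation are all assumptions, not the study's actual criteria weights or data.

```python
# Hypothetical sketch of an integral exposure score: each settlement gets a
# 1-5 score per criterion for each permafrost process; scores are summed
# into a single overall exposure value used for ranking.
criteria = ("extent", "duration", "probability", "damage_risk")

def integral_exposure(process_scores):
    """process_scores: {process: {criterion: score}} -> total exposure."""
    return sum(scores[c] for scores in process_scores.values() for c in criteria)

# Invented example values for two fictional settlements (illustrative only;
# a real assessment would cover all 37 settlements and six processes).
settlements = {
    "Town A": {"thermokarst": dict(extent=1, duration=2, probability=1, damage_risk=1),
               "frost_heave": dict(extent=2, duration=1, probability=2, damage_risk=1)},
    "Town B": {"thermokarst": dict(extent=4, duration=5, probability=4, damage_risk=5),
               "frost_heave": dict(extent=3, duration=4, probability=4, damage_risk=4)},
}
ranked = sorted(settlements, key=lambda s: integral_exposure(settlements[s]),
                reverse=True)  # highest overall exposure first
```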
How to cite: Streletskiy, D., Grebenets, V., and Zamyatina, N.: Assessment of Dangerous Permafrost Processes in Urban Settlements of the Russian Arctic, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-10350, https://doi.org/10.5194/egusphere-egu2020-10350, 2020.
EGU2020-13995 | Displays | ITS5.9/EOS4.14
Impacts of infrastructure and climate changes on reindeer herding in the Yamal, west Siberia.Timo Kumpula, Roza Laptander, and Bruce C. Forbes
The traditional land use in Yamal is reindeer herding, practiced by nomadic Nenets herders. The hydrocarbon industry is presently the source of most ecological changes in the Yamal Peninsula and of the socio-economic impacts experienced by migratory Nenets herders, who move annually between winter pastures at the treeline and coastal summer pastures by the Kara Sea.
In the central Yamal Peninsula, a permafrost area, both natural and anthropogenic changes have occurred during the past 40 years. The giant Bovanenkovo gas field was discovered in 1972 and entered production in 2012. We have studied gas field development and natural changes such as increased shrub growth, cryogenic landslides and drying lakes in the region, and their impacts on Nenets reindeer herding.
Nenets managing collective and privately owned herds of reindeer have proven adept at responding to a broad range of intensifying industrial impacts, at the same time as they have been dealing with the symptoms of a warming climate and thawing permafrost.
Climate change, together with the industrial development of the Yamal Peninsula, has a serious impact on Nenets nomadic reindeer husbandry. Its consequences force Nenets reindeer herders to change their migration routes and the way they work with reindeer. Over several years, we conducted interviews with Nenets reindeer herders about the influence of climate change and the industrialization of the tundra on the quality of Nenets nomads’ life and their work with reindeer. Herders reported that industrial development has reduced their migration opportunities, as well as the quality of grazing pastures, which has fatal effects when the tundra ices over in winter. At the same time, in summer the reindeer have more food because of increasing green vegetation.
Here we detail both the climate change impacts and the spatial extent of gas field growth, landslides, drying lakes and shrub increase, and the dynamic relationship between Nenets nomads and their rapidly evolving social-ecological system.
How to cite: Kumpula, T., Laptander, R., and Forbes, B. C.: Impacts of infrastructure and climate changes on reindeer herding in the Yamal, west Siberia., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-13995, https://doi.org/10.5194/egusphere-egu2020-13995, 2020.
EGU2020-2458 | Displays | ITS5.9/EOS4.14
Svalbard's Arctic Settlements: From Mining Sites to Urbanized EnvironmentsDan Baciu and Anna Abramova
A century ago, Svalbard became the northernmost permanently inhabited Arctic archipelago when international treaties allowed multiple nations to extract coal and exploit the lands. The first settlements were founded as industrial production bases. Mining has since gradually declined, but the former mining settlements have become urbanized, which opens wholly new perspectives for Svalbard’s present and future: a new university center is thriving, and local media and tourism industries are expanding. Local authorities use these developments to quantify urban growth over the last decades. They hope that this growth will eventually substitute for previous mining activities and point towards a future that could make Svalbard a prime example of sustainable urbanization in the Arctic. We integrate novel digital humanities techniques with historical analysis and chemical screening of snowpack, which makes it possible to holistically evaluate the interplay between multiple layers of urbanization, tourism, research initiatives, and mining activities and relics as integral parts of a larger and constantly evolving cultural multifold. From this vantage point, a double-phased evolution becomes identifiable. In the 20th century, Svalbard’s urban and cultural life diversified; the urban growth observed in the 21st century is a result of this initial diversification. This new perspective may help local authorities manage urban growth. In particular, we draw attention to urban and cultural diversification and suggest that diversity is a source of urban growth, rather than a mere byproduct of it. In addition, the new results constitute a further test of our previous work on Svalbard and on cultural diversification. In previous conference contributions, we showed that persistent environmental awareness formed in Svalbard only long after mining activity had affected the environment.
We now continue along these lines by proposing that the formation of persistent environmental awareness is only part of urban and cultural diversification, which includes the rise of diverse local cultures and identities. Beyond Svalbard, our past and present work may help policy makers and populations around the world understand diversification processes and their impact on urban and cultural growth.
How to cite: Baciu, D. and Abramova, A.: Svalbard's Arctic Settlements: From Mining Sites to Urbanized Environments , EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2458, https://doi.org/10.5194/egusphere-egu2020-2458, 2020.
EGU2020-22647 | Displays | ITS5.9/EOS4.14
Memories of Mining: First Nation of Na-Cho Nyäk Dun Elders’ perspectivesSusanna Gartler and Gertrude Saxinger
This poster addresses the need to understand perspectives of change, both societal and environmental, from Indigenous viewpoints in Canada. It is based on six years of collaborative, community-based research in Mayo, including semi-structured and narrative interviews with First Nation of Na-Cho Nyäk Dun Elders. Their accounts tell of over a century of interaction and involvement with the extractive industry. The poster addresses the way First Nation of Na-Cho Nyäk Dun Elders experienced and make sense of several major shifts: from settling at the onset of galena ore extraction, to life in and relocation from ‘Dän Ku’ (Our Home) to the townsite of Mayo, to life and work in Elsa and Keno – the nearby mining hills, today home to one of Canada’s largest gold mines. It discusses contemporary concerns with the industry, such as increased access to and thus pressure on wildlife due to mining roads, as well as pollution, economic benefits and local employment. The poster further considers the methodological process, which was centered on a community-based participatory approach. It is part of the outreach and science communication activities of the ReSDA (Resources and Sustainable Development in the Arctic) funded project “LACE – Labour Mobility and Community Participation in the Extractive Industry: Case Study in the Yukon”.
How to cite: Gartler, S. and Saxinger, G.: Memories of Mining: First Nation of Na-Cho Nyäk Dun Elders’ perspectives, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-22647, https://doi.org/10.5194/egusphere-egu2020-22647, 2020.
EGU2020-9648 | Displays | ITS5.9/EOS4.14
What Framework Promotes Saliency of Climate Change Issues on Online Public Agenda: A Quantitative Study of Online Knowledge Community QuoraWen Shi
Though scientists reached consensus on the severity and urgency of climate change years ago, the public still considers the issue relatively unimportant, as the influence of climate change is widely thought to be geographically and temporally bounded. The discrepancy between scientific consensus and the public's misperception calls for more dedicated public communication strategies to bring climate change issues back to the front of the public agenda. Based on large-scale data acquired from the online knowledge community Quora, we conduct a computational linguistic analysis followed by regression modelling to address climate change communication from the agenda-setting perspective. Specifically, our results show that certain narrative strategies may make climate change issues more salient by engaging the public in discussion or evoking their long-term interest. Though science communicators have long blamed a lack of scientific literacy for the low saliency of climate change issues, the cognitive framework proved least effective in raising public concern. The affective framework is relatively more influential in motivating people to participate in climate change discussion: the stronger the affective intensity, the more prominent the issue, while affective polarity is not important. The perceptual framework is most powerful in promoting public discussion and is the only variable that significantly motivates the public's long-term desire to track issues; among perceptual cues, feeling plays the most critical role compared with seeing and hearing. This study extends the existing science communication literature by shedding light on the role of the previously ignored affective and perceptual frameworks in making issues salient, and its conclusions may provide theoretical and practical implications for future climate change communication.
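The regression step described above can be illustrated with a minimal single-predictor example: regressing an issue-saliency measure on an affective-intensity feature extracted from question text. This is a hypothetical sketch only; the feature values, data, and the simple ordinary-least-squares model below are fabricated for illustration and do not reproduce the study's actual features or model specification.

```python
# Illustrative single-predictor OLS: does affective intensity of a question
# predict its saliency (e.g., discussion volume)? All data are toy numbers.

def ols(x, y):
    """Ordinary least squares for one predictor: returns (slope, intercept)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Toy data: affective intensity score per question vs. discussion volume
intensity = [0.1, 0.3, 0.5, 0.7, 0.9]
saliency = [2.0, 3.1, 3.9, 5.2, 6.0]

slope, intercept = ols(intensity, saliency)
# A positive slope would be consistent with the abstract's finding that
# stronger affective intensity associates with higher issue saliency.
print(round(slope, 2))
```

In the actual study, the predictors would be text-derived features (cognitive, affective, and perceptual framework scores) and the model would likely be multivariate; this sketch only shows the shape of the regression step.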
How to cite: Shi, W.: What Framework Promotes Saliency of Climate Change Issues on Online Public Agenda: A Quantitative Study of Online Knowledge Community Quora, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-9648, https://doi.org/10.5194/egusphere-egu2020-9648, 2020.
EGU2020-2016 | Displays | ITS5.9/EOS4.14 | Highlight
Calls to Action: Climate Change and Indigenous Food Sovereignty in ArcticRanjan Datta
This study responds to the need to reconcile Indigenous perspectives on climate change and food sovereignty in the Arctic. We will explore how recent climate change (and its interpretation) challenges Indigenous food sovereignty sources; what is at stake in processes such as hunting consultation, impact assessment, regulatory hearings, approvals (including negotiation of benefits), and monitoring; and what reformed processes can build Indigenous community capacity and support robust decisions. The outcomes will assist policy makers and communities in guiding future consultations, impact assessment guidelines, and climate change planning initiatives. We (an interdisciplinary research team of Indigenous Elders, knowledge-keepers, and Indigenous and non-Indigenous scholars) will focus on Indigenous understanding of Indigenous philosophies of climate change, and on the connections between climate change, food sovereignty and sustainability as they interact and interdepend with health security and the protection of Indigenous environmental and cultural values. Indigenous knowledge-ways have much to offer in support of climate change resilience and water infrastructure in Indigenous communities, intercultural reconceptualization of research methodologies, environmental sustainability, and educational programs that support Indigenous communities.
Action Plan. Objective 1: Support Indigenous perspectives on climate change impact management and food sovereignty. This includes involving members of the Indigenous community to offer insight into Indigenous cultural and community responsibilities for managing climate change impacts, in order to inform food sovereignty performance review policy development. Contribution: designing, coordinating, and hosting an interdisciplinary Focused Dialogue Session on the relationship between climate change impact management and food sovereignty. This Dialogue Session creates new scholarly knowledge about pipeline leak impacts and food sovereignty processes. Objective 2: Develop effective and trustful engagement dialogues to build capacity among Indigenous Elders, knowledge-keepers, and scholars. Contribution: this objective supports Indigenous perspectives through specific, policy-oriented research that positively impacts their vision and allows them to develop new ways of addressing climate change impacts and food sovereignty, revealing climate change impact management and food sovereignty policy and practices in the Arctic. Objective 3: Mobilize knowledge and partnership for reconciliation (specifically, translate research results into evidence for policy-making) by developing an impact assessment policy guideline. Contribution: the impact assessment policy guideline shares knowledge and implications of climate change impact management policy documents locally, provincially, and nationally, and assists in the articulation and practice of food sovereignty source protection, as culturally and community informed.
How to cite: Datta, R.: Calls to Action: Climate Change and Indigenous Food Sovereignty in Arctic, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-2016, https://doi.org/10.5194/egusphere-egu2020-2016, 2020.
EGU2020-18853 | Displays | ITS5.9/EOS4.14 | Highlight
INTAROS teaching and outreach materials: how a multidisciplinary project creates opportunities for teachers and the general public.Ruth Higgins, Thomas Juul-Pedersen, Agata Goździk, Walter Oechl, Donatella Zona, Kjetil Lygre, and Stein Sandven
The INTAROS project has a strong multidisciplinary focus, with tools for integration of data from atmosphere, ocean, cryosphere and terrestrial sciences, provided by institutions in Europe, North America and Asia. The dissemination activities aim to share knowledge about the Arctic with academia and with the general public.
The dissemination and exploitation activities are closely linked with communication and stakeholder engagement: the target audiences include research, public services, commercial operators, investment, insurance, environmental organizations, policy makers, local communities, and educational institutes. One of the INTAROS objectives is to disseminate project results to raise awareness of Arctic challenges and to inform and engage key users and stakeholder communities to improve their understanding of the Arctic environmental state and processes. The further aim is to build capacity in using the new products and services originating from the INTAROS project.
This contribution provides an overview of dissemination materials and products that are targeted towards teaching and/or intended for outreach purposes. The referenced teaching materials include products aimed at students ranging from school to university level, as well as the general public. The outreach materials are aimed at communicating knowledge about the INTAROS project, the scientific work, key findings as well as promoting general knowledge about climate and climate change.
How to cite: Higgins, R., Juul-Pedersen, T., Goździk, A., Oechl, W., Zona, D., Lygre, K., and Sandven, S.: INTAROS teaching and outreach materials: how a multidisciplinary project creates opportunities for teachers and the general public., EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-18853, https://doi.org/10.5194/egusphere-egu2020-18853, 2020.
EGU2020-4936 | Displays | ITS5.9/EOS4.14
Resources for teachers on the “Ocean and Cryosphere in a Changing Climate”Eric Guilyardi, Lydie Lescarmontier, Robin Matthews, Nathalie Morata, Mariana Rocha, Jenny Schlüpmann, Mathilde Tricoire, and David Wilgenbus
The essential role of education in addressing the causes and consequences of anthropogenic climate change is increasingly being recognised at an international level.
The Office for Climate Education (OCE) develops Climate Change Education (CCE) resources that support teachers and education systems in developed and developing countries to mainstream climate change education in their respective contexts. The OCE organises capacity building/professional development workshops worldwide for educators. It has also initiated and coordinates a large network of stakeholders to scale up their actions towards climate change resilience.
Drawing upon the IPCC Special Report on the Ocean and Cryosphere in a Changing Climate, the OCE has produced a set of educational resources and tools for students to understand climate change in the context of the ocean and the cryosphere. These cover the scientific and societal dimensions, at local and global levels, while developing students’ reasoning abilities and guiding them to take action (mitigation and/or adaptation) in their schools or communities. These resources include:
- Ready-to-use teacher handbooks that (i) target students from the last years of primary school to the end of lower-secondary school (aged 9 to 15), (ii) include scientific and pedagogical overviews, lesson plans, activities and worksheets, (iii) are interdisciplinary, covering topics in the natural sciences, social sciences, arts and physical education, and (iv) promote active pedagogies: inquiry-based science education, role-play, debate, and project-based learning.
- Summaries for teachers of two IPCC Special Reports (“Ocean and Cryosphere in a changing climate” and “Global Warming of 1.5°C”). They are presented together with a selection of related activities and exercises that can be implemented in the classroom.
- A set of 10 videos where experts speak about a specific issue related to the ocean or the cryosphere, in the context of climate change. These videos can be used either to initiate or to conclude a discussion with students on their specific topic: urban heat islands, glaciers, ocean acidification, tropical cyclones, marine energy, sea ice melt, thermohaline circulation, El Niño, mangroves, sea level rise.
- A set of 4 multimedia activities offering students the possibility of working interactively in different topics related to climate change: sea level rise, food webs, carbon footprints and mitigation/adaptation solutions.
- A set of 3 resources for teacher trainers, offering turnkey training protocols on the topics “greenhouse effect” and “ocean”, as well as a methodology for producing locally-relevant education projects.
How to cite: Guilyardi, E., Lescarmontier, L., Matthews, R., Morata, N., Rocha, M., Schlüpmann, J., Tricoire, M., and Wilgenbus, D.: Resources for teachers on the “Ocean and Cryosphere in a Changing Climate”, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-4936, https://doi.org/10.5194/egusphere-egu2020-4936, 2020.
EGU2020-12600 | Displays | ITS5.9/EOS4.14
Arctic ice in cross-disciplinary undergraduate education: Experiences across natural science, social science, international policy, and public writingMichelle Koutnik, Nadine Fabbi, Elizabeth Wessells, Ellen Ahlness, Max Showalter, Dan Mandeville, Jason Young, and Hans Christian Steen-Larsen
With the Arctic currently warming at a rate at least twice that of the global average, the coupled Arctic ecosystem is losing ice. This includes significant land-ice loss from the Greenland Ice Sheet and Arctic ice caps and glaciers, reduction in the extent and thickness of Arctic sea ice, and thawing permafrost. This scale of environmental change significantly affects Arctic people, wildlife, infrastructure, transportation, and access. Societal response to these changes relies on advances in and application of research spanning multiple scientific disciplines, with policy-making done in partnership with Indigenous people, governments, private agencies, multinational corporations, and other interested groups. Everyone will be affected by the outcomes of a changing climate, and the challenge is mounting for the next generation of leaders. The cross-disciplinary nature of the challenge of Arctic ice loss and climate change must be met by cross-disciplinary undergraduate education. While higher education aims for disciplinary training in the natural sciences and social sciences, there is an increasing responsibility to integrate topics and immerse students in real-world issues. In our experience, the undergraduates we teach are eager for courses that do this well.
What is immersive undergraduate education? We consider this as either immersing students in a focused topic in the classroom, immersing students in a place (especially while abroad), or combining the two through targeted lectures, informed discussions, travel, and writing. With regard to the Arctic, it is necessary to bring scientific understanding to learning activities otherwise focused on societal impacts, policy making, and knowledge exchange through public writing.
We share our practical experience teaching Arctic-focused courses to classes of 10-30 students each, with majors from across the University of Washington (UW) campus (total undergraduate student body of 32,000). Three recent activities that integrate the state of science with impacts on society in undergraduate courses include: 1) a four-week study abroad course to Greenland and Denmark focusing on changes in the Greenland Ice Sheet and sea-level rise, 2) a 10-week Task Force course in Arctic Sea Ice and International Policy in partnership with the UW International Policy Institute at the Henry M. Jackson School of International Studies that includes one week in Ottawa where students develop a mock Arctic sea ice policy for Canada consistent with Inuit priorities, and 3) a 10-week seminar in public writing where students write mock newspaper articles, book reviews, and policy summaries about ice in a changing climate. These courses were designed to include a similar subset of earth science, atmospheric science, and oceanography, but the distinct structure and application of the science in these three separate courses led to distinct learning outcomes. In addition, we present how the academic minor in Arctic Studies at the University of Washington has allowed students to design their own integrated understanding of Indigenous and nation-state Arctic geopolitics, Arctic environmental change, and policy by taking a selection of courses and engaging in research and report writing.
How to cite: Koutnik, M., Fabbi, N., Wessells, E., Ahlness, E., Showalter, M., Mandeville, D., Young, J., and Steen-Larsen, H. C.: Arctic ice in cross-disciplinary undergraduate education: Experiences across natural science, social science, international policy, and public writing, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-12600, https://doi.org/10.5194/egusphere-egu2020-12600, 2020.
EGU2020-17963 | Displays | ITS5.9/EOS4.14
Outreach and disseminations activities in North Slope of Alaska: how to build trust between local communities and arctic researchersKaare Sikuaq Erickson, Donatella Zona, Marco Montemayor, Walter Oechel, and Terenzio Zenone
The Alaskan Ukpeaġvik Iñupiat Corporation (UIC) is promoting and financially supporting, with contributions from the US National Science Foundation (NSF) and local organizations, outreach and dissemination events in the form of science fairs for the local communities of the North Slope of Alaska. The science fair is part of a larger effort by UIC Science to bring coordination and collaboration to science outreach and engagement efforts across Arctic Alaska. The purpose is to provide a positive space for Arctic researchers and Arctic residents to meet, eat with each other, spend time together, and to inspire the youth of the Arctic with fun and educational activities that are based in science and traditional knowledge. The Science Fair 2019, hosted by the Barrow Arctic Research Center (BARC), included three days of youth- and family-friendly activities: “Inupiat Knowledge about Plants” led by the College Inupiat Studies Department, “Eco-chains Activity” hosted by the North Slope Borough Office of Emergency Management, “Big Little World: Bugs, Plants, and Microscopes” hosted by the National Ecological Observatory Network, “Microplastics in the Arctic” hosted by the North Slope Borough Department of Wildlife Management, “BARC Scavenger Hunt” hosted by UIC Science, “Our Role in the Carbon and Methane Cycle” hosted by the University of Texas at El Paso (UTEP) and San Diego State University, and “How Permafrost Works” hosted by the University of Alaska Fairbanks Geophysical Institute. Each day, hundreds of students from both the local community and the science community came together in mutually beneficial engagement: students from Utqiaġvik were excited about science and now know of the realistic and fulfilling research careers available in their backyard, and Utqiaġvik community members and elders now have a better idea of the breadth of research that takes place in and near their home.
The locals, especially the elders, are very concerned about the drastic changes in the environment; scientists share these concerns, and the discussions during the fair were a chance to recognize this common ground. Breaking the ice between Arctic researchers and residents can lead to endless opportunities for collaboration, sharing ideas, and even lifelong friendships.
How to cite: Erickson, K. S., Zona, D., Montemayor, M., Oechel, W., and Zenone, T.: Outreach and disseminations activities in North Slope of Alaska: how to build trust between local communities and arctic researchers, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-17963, https://doi.org/10.5194/egusphere-egu2020-17963, 2020.
EGU2020-591 | Displays | ITS5.9/EOS4.14
Community Engagement in Permafrost Research at the Western Arctic Research Centre, Inuvik, Northwest Territories, CanadaErika Hille, Joel McAlister, Alice Wilson, and Steve Kokelj
The Arctic is experiencing climate warming at a more pronounced rate than other regions. This has significant implications for the thermal stability of permafrost, which strongly depends on the long, cold winters typical of the region. Canada’s western Arctic is typically more sensitive to permafrost thaw than other Arctic regions in Canada, because it is underlain by large regions of ice-rich permafrost that are protected only by a thin layer of organic and mineral soil. As a result, disturbances (e.g. fire, shallow landslides, thermal and mechanical erosion, construction) often lead to the exposure and thaw of the underlying permafrost. Climate-induced permafrost thaw has led to dramatic changes to the landscape, impacting communities, infrastructure, and traditional ways of being. In this region, northern stakeholders have invested in research infrastructure that enables them to actively participate in research design and implementation and to lead their own research programs. Since permafrost is intrinsically linked to the social, cultural, and economic fabric of the region, it is critical that local stakeholders be engaged in permafrost research.
The Western Arctic Research Centre (WARC) is located in Inuvik, Northwest Territories, Canada. Inuvik is situated in the Beaufort Delta Region of Northwestern Canada, approximately 120 km from the Arctic Ocean. A key goal of WARC is to support and conduct research that fosters the social, cultural, and economic prosperity of the people of the Northwest Territories. In response to local concerns, WARC has developed a suite of research programs that focus on the impacts of permafrost thaw on terrestrial, freshwater, and marine systems. To ensure that these research programs are responsive to the concerns of northern and Indigenous residents, WARC works in partnership with researchers, communities, government bodies, and Indigenous and co-management organizations. Project partners provide critical feedback on research design, study site selection, and how to communicate research to a northern audience. Furthermore, the Permafrost Information Hub at WARC is working with local organizations to establish community-based permafrost research and monitoring in the Beaufort Delta Region. This includes the development and delivery of training programs for local environmental monitors, increasing capacity in the region to support permafrost research. Northerners need to be involved in permafrost research. How Northerners want to be involved will differ depending on the location within the region and the nature of the research. This emphasizes the need for consistent, open lines of communication between researchers and local partners.
This oral presentation will outline the steps WARC has taken to engage northern and Indigenous residents in its permafrost research programs, lessons learned, and successes.
How to cite: Hille, E., McAlister, J., Wilson, A., and Kokelj, S.: Community Engagement in Permafrost Research at the Western Arctic Research Centre, Inuvik, Northwest Territories, Canada, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-591, https://doi.org/10.5194/egusphere-egu2020-591, 2020.
EGU2020-11764 | Displays | ITS5.9/EOS4.14
CrowdMag for personal interaction with Arctic magnetic variationRick Saltus and Manoj Nair
The Earth’s magnetic field is especially dynamic at high latitudes. The most awesome manifestation of this is certainly the aurora borealis or northern lights – caused by the interaction of the solar wind with the Earth’s magnetic field. Aside from the aurora you can’t see these magnetic variations. But your phone can. Virtually every modern smartphone is equipped with a 3-component magnetometer to enable the compass pointing capability for navigation. CrowdMag is a popular NOAA/CIRES citizen science app that we developed to tap into your smartphone’s magnetometer. It lets you interact with the Earth’s magnetic field.
The purpose of this presentation is to highlight the possibilities for using CrowdMag for science outreach and engagement, particularly in Arctic regions where day-to-day magnetic variations can exceed hundreds of nanoteslas (nT). We will show example projects that were carried out by summer interns as part of the University of Colorado’s “Research Experience for Community College Students” (RECCS) program. CrowdMag can be used to carry out various simple experiments for mapping and investigating the Earth’s magnetic field. We seek input and collaboration with others interested in citizen science and outreach in Arctic regions.
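The kind of magnetometer data described above reduces to familiar geomagnetic quantities with a little arithmetic: total field strength is the vector magnitude of the three components, and the horizontal components give a compass heading. The sketch below illustrates this; it is not CrowdMag's actual code or API, and the sample readings are hypothetical values loosely typical of high northern latitudes.

```python
import math

def field_magnitude(bx, by, bz):
    """Total magnetic field strength from 3-component readings (same unit as inputs)."""
    return math.sqrt(bx**2 + by**2 + bz**2)

def heading_deg(bx, by):
    """Angle of the horizontal field from the x (north) axis, in degrees."""
    return math.degrees(math.atan2(by, bx))

# Hypothetical quiet-day vs. disturbed-day readings at high latitude, in nT.
quiet = field_magnitude(10500.0, 1200.0, 56000.0)
disturbed = field_magnitude(10200.0, 1500.0, 56400.0)
print(round(disturbed - quiet, 1))  # a change on the order of a few hundred nT
```

With readings like these, the day-to-day change in total field is a few hundred nT, the scale of variation the abstract describes for Arctic regions; the same magnitude calculation underlies any simple field-mapping exercise with phone magnetometer output.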
How to cite: Saltus, R. and Nair, M.: CrowdMag for personal interaction with Arctic magnetic variation, EGU General Assembly 2020, Online, 4–8 May 2020, EGU2020-11764, https://doi.org/10.5194/egusphere-egu2020-11764, 2020.